00:00:00.001 Started by upstream project "autotest-per-patch" build number 131187 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.011 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:05.259 The recommended git tool is: git 00:00:05.259 using credential 00000000-0000-0000-0000-000000000002 00:00:05.262 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:05.276 Fetching changes from the remote Git repository 00:00:05.279 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:05.294 Using shallow fetch with depth 1 00:00:05.294 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:05.294 > git --version # timeout=10 00:00:05.307 > git --version # 'git version 2.39.2' 00:00:05.307 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:05.321 Setting http proxy: proxy-dmz.intel.com:911 00:00:05.321 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:11.077 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:11.090 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:11.103 Checking out Revision 3f5fbcceba25866ebf7e22fd0e5d30548272f62c (FETCH_HEAD) 00:00:11.103 > git config core.sparsecheckout # timeout=10 00:00:11.116 > git read-tree -mu HEAD # timeout=10 00:00:11.132 > git checkout -f 3f5fbcceba25866ebf7e22fd0e5d30548272f62c # timeout=5 00:00:11.152 Commit message: "packer: Bump java's version" 00:00:11.152 > git rev-list --no-walk 
3f5fbcceba25866ebf7e22fd0e5d30548272f62c # timeout=10 00:00:11.238 [Pipeline] Start of Pipeline 00:00:11.250 [Pipeline] library 00:00:11.252 Loading library shm_lib@master 00:00:11.252 Library shm_lib@master is cached. Copying from home. 00:00:11.270 [Pipeline] node 00:00:11.277 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest_2 00:00:11.278 [Pipeline] { 00:00:11.289 [Pipeline] catchError 00:00:11.290 [Pipeline] { 00:00:11.302 [Pipeline] wrap 00:00:11.311 [Pipeline] { 00:00:11.319 [Pipeline] stage 00:00:11.321 [Pipeline] { (Prologue) 00:00:11.342 [Pipeline] echo 00:00:11.344 Node: VM-host-SM17 00:00:11.352 [Pipeline] cleanWs 00:00:11.363 [WS-CLEANUP] Deleting project workspace... 00:00:11.363 [WS-CLEANUP] Deferred wipeout is used... 00:00:11.369 [WS-CLEANUP] done 00:00:11.548 [Pipeline] setCustomBuildProperty 00:00:11.625 [Pipeline] httpRequest 00:00:11.991 [Pipeline] echo 00:00:11.993 Sorcerer 10.211.164.101 is alive 00:00:12.002 [Pipeline] retry 00:00:12.004 [Pipeline] { 00:00:12.018 [Pipeline] httpRequest 00:00:12.022 HttpMethod: GET 00:00:12.022 URL: http://10.211.164.101/packages/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz 00:00:12.023 Sending request to url: http://10.211.164.101/packages/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz 00:00:12.027 Response Code: HTTP/1.1 200 OK 00:00:12.027 Success: Status code 200 is in the accepted range: 200,404 00:00:12.028 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz 00:00:35.975 [Pipeline] } 00:00:35.992 [Pipeline] // retry 00:00:36.000 [Pipeline] sh 00:00:36.290 + tar --no-same-owner -xf jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz 00:00:36.306 [Pipeline] httpRequest 00:00:36.699 [Pipeline] echo 00:00:36.700 Sorcerer 10.211.164.101 is alive 00:00:36.709 [Pipeline] retry 00:00:36.711 [Pipeline] { 00:00:36.726 [Pipeline] httpRequest 00:00:36.730 HttpMethod: GET 00:00:36.731 URL: 
http://10.211.164.101/packages/spdk_aa3f30c36c5bea0b4877a5500b8113b462e9c2cb.tar.gz 00:00:36.731 Sending request to url: http://10.211.164.101/packages/spdk_aa3f30c36c5bea0b4877a5500b8113b462e9c2cb.tar.gz 00:00:36.735 Response Code: HTTP/1.1 200 OK 00:00:36.735 Success: Status code 200 is in the accepted range: 200,404 00:00:36.736 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_aa3f30c36c5bea0b4877a5500b8113b462e9c2cb.tar.gz 00:05:33.581 [Pipeline] } 00:05:33.599 [Pipeline] // retry 00:05:33.607 [Pipeline] sh 00:05:33.887 + tar --no-same-owner -xf spdk_aa3f30c36c5bea0b4877a5500b8113b462e9c2cb.tar.gz 00:05:37.212 [Pipeline] sh 00:05:37.490 + git -C spdk log --oneline -n5 00:05:37.490 aa3f30c36 nvme/perf: interrupt mode support for pcie controller 00:05:37.490 eb4fb2f08 bdev/nvme: interrupt mode for PCIe transport 00:05:37.490 35c8daa94 nvme/poll_group: create and manage fd_group for nvme poll group 00:05:37.490 0ea3371f3 thread: Extended options for spdk_interrupt_register 00:05:37.490 e85295127 util: fix total fds to wait for 00:05:37.508 [Pipeline] writeFile 00:05:37.523 [Pipeline] sh 00:05:37.803 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:05:37.814 [Pipeline] sh 00:05:38.094 + cat autorun-spdk.conf 00:05:38.094 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:38.094 SPDK_RUN_ASAN=1 00:05:38.094 SPDK_RUN_UBSAN=1 00:05:38.094 SPDK_TEST_RAID=1 00:05:38.094 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:38.101 RUN_NIGHTLY=0 00:05:38.104 [Pipeline] } 00:05:38.118 [Pipeline] // stage 00:05:38.132 [Pipeline] stage 00:05:38.134 [Pipeline] { (Run VM) 00:05:38.147 [Pipeline] sh 00:05:38.427 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:05:38.427 + echo 'Start stage prepare_nvme.sh' 00:05:38.427 Start stage prepare_nvme.sh 00:05:38.427 + [[ -n 0 ]] 00:05:38.427 + disk_prefix=ex0 00:05:38.427 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]] 00:05:38.427 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]] 
00:05:38.427 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf 00:05:38.427 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:38.427 ++ SPDK_RUN_ASAN=1 00:05:38.427 ++ SPDK_RUN_UBSAN=1 00:05:38.427 ++ SPDK_TEST_RAID=1 00:05:38.427 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:38.427 ++ RUN_NIGHTLY=0 00:05:38.427 + cd /var/jenkins/workspace/raid-vg-autotest_2 00:05:38.427 + nvme_files=() 00:05:38.427 + declare -A nvme_files 00:05:38.427 + backend_dir=/var/lib/libvirt/images/backends 00:05:38.427 + nvme_files['nvme.img']=5G 00:05:38.427 + nvme_files['nvme-cmb.img']=5G 00:05:38.427 + nvme_files['nvme-multi0.img']=4G 00:05:38.427 + nvme_files['nvme-multi1.img']=4G 00:05:38.427 + nvme_files['nvme-multi2.img']=4G 00:05:38.427 + nvme_files['nvme-openstack.img']=8G 00:05:38.427 + nvme_files['nvme-zns.img']=5G 00:05:38.427 + (( SPDK_TEST_NVME_PMR == 1 )) 00:05:38.427 + (( SPDK_TEST_FTL == 1 )) 00:05:38.427 + (( SPDK_TEST_NVME_FDP == 1 )) 00:05:38.427 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:05:38.427 + for nvme in "${!nvme_files[@]}" 00:05:38.427 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:05:38.427 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:05:38.427 + for nvme in "${!nvme_files[@]}" 00:05:38.427 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:05:38.427 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:05:38.427 + for nvme in "${!nvme_files[@]}" 00:05:38.427 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:05:38.427 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:05:38.428 + for nvme in "${!nvme_files[@]}" 00:05:38.428 + sudo -E 
spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:05:38.428 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:05:38.428 + for nvme in "${!nvme_files[@]}" 00:05:38.428 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:05:38.428 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:05:38.428 + for nvme in "${!nvme_files[@]}" 00:05:38.428 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:05:38.428 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:05:38.428 + for nvme in "${!nvme_files[@]}" 00:05:38.428 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:05:38.428 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:05:38.428 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:05:38.428 + echo 'End stage prepare_nvme.sh' 00:05:38.428 End stage prepare_nvme.sh 00:05:38.437 [Pipeline] sh 00:05:38.714 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:05:38.714 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:05:38.714 00:05:38.714 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant 00:05:38.714 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk 00:05:38.714 
VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2 00:05:38.714 HELP=0 00:05:38.714 DRY_RUN=0 00:05:38.714 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:05:38.714 NVME_DISKS_TYPE=nvme,nvme, 00:05:38.714 NVME_AUTO_CREATE=0 00:05:38.714 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:05:38.714 NVME_CMB=,, 00:05:38.714 NVME_PMR=,, 00:05:38.714 NVME_ZNS=,, 00:05:38.714 NVME_MS=,, 00:05:38.714 NVME_FDP=,, 00:05:38.714 SPDK_VAGRANT_DISTRO=fedora39 00:05:38.714 SPDK_VAGRANT_VMCPU=10 00:05:38.714 SPDK_VAGRANT_VMRAM=12288 00:05:38.714 SPDK_VAGRANT_PROVIDER=libvirt 00:05:38.714 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:05:38.714 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:05:38.714 SPDK_OPENSTACK_NETWORK=0 00:05:38.714 VAGRANT_PACKAGE_BOX=0 00:05:38.714 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:05:38.714 FORCE_DISTRO=true 00:05:38.714 VAGRANT_BOX_VERSION= 00:05:38.714 EXTRA_VAGRANTFILES= 00:05:38.714 NIC_MODEL=e1000 00:05:38.714 00:05:38.714 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt' 00:05:38.714 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2 00:05:42.011 Bringing machine 'default' up with 'libvirt' provider... 00:05:42.578 ==> default: Creating image (snapshot of base box volume). 00:05:42.836 ==> default: Creating domain with the following settings... 
00:05:42.836 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728983126_76ad32691d7b935a8724 00:05:42.836 ==> default: -- Domain type: kvm 00:05:42.836 ==> default: -- Cpus: 10 00:05:42.837 ==> default: -- Feature: acpi 00:05:42.837 ==> default: -- Feature: apic 00:05:42.837 ==> default: -- Feature: pae 00:05:42.837 ==> default: -- Memory: 12288M 00:05:42.837 ==> default: -- Memory Backing: hugepages: 00:05:42.837 ==> default: -- Management MAC: 00:05:42.837 ==> default: -- Loader: 00:05:42.837 ==> default: -- Nvram: 00:05:42.837 ==> default: -- Base box: spdk/fedora39 00:05:42.837 ==> default: -- Storage pool: default 00:05:42.837 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728983126_76ad32691d7b935a8724.img (20G) 00:05:42.837 ==> default: -- Volume Cache: default 00:05:42.837 ==> default: -- Kernel: 00:05:42.837 ==> default: -- Initrd: 00:05:42.837 ==> default: -- Graphics Type: vnc 00:05:42.837 ==> default: -- Graphics Port: -1 00:05:42.837 ==> default: -- Graphics IP: 127.0.0.1 00:05:42.837 ==> default: -- Graphics Password: Not defined 00:05:42.837 ==> default: -- Video Type: cirrus 00:05:42.837 ==> default: -- Video VRAM: 9216 00:05:42.837 ==> default: -- Sound Type: 00:05:42.837 ==> default: -- Keymap: en-us 00:05:42.837 ==> default: -- TPM Path: 00:05:42.837 ==> default: -- INPUT: type=mouse, bus=ps2 00:05:42.837 ==> default: -- Command line args: 00:05:42.837 ==> default: -> value=-device, 00:05:42.837 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:05:42.837 ==> default: -> value=-drive, 00:05:42.837 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:05:42.837 ==> default: -> value=-device, 00:05:42.837 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:42.837 ==> default: -> value=-device, 00:05:42.837 ==> default: -> 
value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:05:42.837 ==> default: -> value=-drive, 00:05:42.837 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:05:42.837 ==> default: -> value=-device, 00:05:42.837 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:42.837 ==> default: -> value=-drive, 00:05:42.837 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:05:42.837 ==> default: -> value=-device, 00:05:42.837 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:42.837 ==> default: -> value=-drive, 00:05:42.837 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:05:42.837 ==> default: -> value=-device, 00:05:42.837 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:43.096 ==> default: Creating shared folders metadata... 00:05:43.096 ==> default: Starting domain. 00:05:44.475 ==> default: Waiting for domain to get an IP address... 00:06:06.394 ==> default: Waiting for SSH to become available... 00:06:06.394 ==> default: Configuring and enabling network interfaces... 00:06:08.935 default: SSH address: 192.168.121.152:22 00:06:08.935 default: SSH username: vagrant 00:06:08.935 default: SSH auth method: private key 00:06:10.853 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:06:18.981 ==> default: Mounting SSHFS shared folder... 00:06:20.882 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:06:20.882 ==> default: Checking Mount.. 
00:06:21.814 ==> default: Folder Successfully Mounted! 00:06:21.814 ==> default: Running provisioner: file... 00:06:22.750 default: ~/.gitconfig => .gitconfig 00:06:23.316 00:06:23.316 SUCCESS! 00:06:23.316 00:06:23.316 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:06:23.316 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:06:23.316 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:06:23.316 00:06:23.325 [Pipeline] } 00:06:23.339 [Pipeline] // stage 00:06:23.347 [Pipeline] dir 00:06:23.348 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt 00:06:23.349 [Pipeline] { 00:06:23.362 [Pipeline] catchError 00:06:23.364 [Pipeline] { 00:06:23.377 [Pipeline] sh 00:06:23.655 + vagrant ssh-config --host vagrant 00:06:23.655 + sed -ne /^Host/,$p 00:06:23.655 + tee ssh_conf 00:06:27.912 Host vagrant 00:06:27.912 HostName 192.168.121.152 00:06:27.912 User vagrant 00:06:27.912 Port 22 00:06:27.912 UserKnownHostsFile /dev/null 00:06:27.912 StrictHostKeyChecking no 00:06:27.912 PasswordAuthentication no 00:06:27.912 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:06:27.912 IdentitiesOnly yes 00:06:27.912 LogLevel FATAL 00:06:27.912 ForwardAgent yes 00:06:27.912 ForwardX11 yes 00:06:27.912 00:06:27.925 [Pipeline] withEnv 00:06:27.928 [Pipeline] { 00:06:27.942 [Pipeline] sh 00:06:28.221 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:06:28.221 source /etc/os-release 00:06:28.221 [[ -e /image.version ]] && img=$(< /image.version) 00:06:28.221 # Minimal, systemd-like check. 
00:06:28.221 if [[ -e /.dockerenv ]]; then 00:06:28.221 # Clear garbage from the node's name: 00:06:28.221 # agt-er_autotest_547-896 -> autotest_547-896 00:06:28.221 # $HOSTNAME is the actual container id 00:06:28.221 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:06:28.221 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:06:28.221 # We can assume this is a mount from a host where container is running, 00:06:28.221 # so fetch its hostname to easily identify the target swarm worker. 00:06:28.221 container="$(< /etc/hostname) ($agent)" 00:06:28.221 else 00:06:28.221 # Fallback 00:06:28.221 container=$agent 00:06:28.221 fi 00:06:28.221 fi 00:06:28.221 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:06:28.221 00:06:28.490 [Pipeline] } 00:06:28.504 [Pipeline] // withEnv 00:06:28.512 [Pipeline] setCustomBuildProperty 00:06:28.527 [Pipeline] stage 00:06:28.529 [Pipeline] { (Tests) 00:06:28.546 [Pipeline] sh 00:06:28.826 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:06:29.100 [Pipeline] sh 00:06:29.445 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:06:29.719 [Pipeline] timeout 00:06:29.719 Timeout set to expire in 1 hr 30 min 00:06:29.721 [Pipeline] { 00:06:29.735 [Pipeline] sh 00:06:30.015 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:06:30.582 HEAD is now at aa3f30c36 nvme/perf: interrupt mode support for pcie controller 00:06:30.593 [Pipeline] sh 00:06:30.872 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:06:31.143 [Pipeline] sh 00:06:31.422 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:06:31.750 [Pipeline] sh 00:06:32.029 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 
JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:06:32.287 ++ readlink -f spdk_repo 00:06:32.287 + DIR_ROOT=/home/vagrant/spdk_repo 00:06:32.287 + [[ -n /home/vagrant/spdk_repo ]] 00:06:32.287 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:06:32.287 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:06:32.287 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:06:32.287 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:06:32.287 + [[ -d /home/vagrant/spdk_repo/output ]] 00:06:32.287 + [[ raid-vg-autotest == pkgdep-* ]] 00:06:32.287 + cd /home/vagrant/spdk_repo 00:06:32.287 + source /etc/os-release 00:06:32.287 ++ NAME='Fedora Linux' 00:06:32.287 ++ VERSION='39 (Cloud Edition)' 00:06:32.287 ++ ID=fedora 00:06:32.287 ++ VERSION_ID=39 00:06:32.287 ++ VERSION_CODENAME= 00:06:32.287 ++ PLATFORM_ID=platform:f39 00:06:32.287 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:06:32.287 ++ ANSI_COLOR='0;38;2;60;110;180' 00:06:32.287 ++ LOGO=fedora-logo-icon 00:06:32.287 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:06:32.287 ++ HOME_URL=https://fedoraproject.org/ 00:06:32.287 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:06:32.287 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:06:32.287 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:06:32.287 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:06:32.287 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:06:32.287 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:06:32.287 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:06:32.287 ++ SUPPORT_END=2024-11-12 00:06:32.287 ++ VARIANT='Cloud Edition' 00:06:32.287 ++ VARIANT_ID=cloud 00:06:32.287 + uname -a 00:06:32.287 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:06:32.287 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:32.546 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:32.546 Hugepages 00:06:32.546 
node hugesize free / total 00:06:32.546 node0 1048576kB 0 / 0 00:06:32.804 node0 2048kB 0 / 0 00:06:32.804 00:06:32.804 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:32.804 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:32.804 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:32.805 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:32.805 + rm -f /tmp/spdk-ld-path 00:06:32.805 + source autorun-spdk.conf 00:06:32.805 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:32.805 ++ SPDK_RUN_ASAN=1 00:06:32.805 ++ SPDK_RUN_UBSAN=1 00:06:32.805 ++ SPDK_TEST_RAID=1 00:06:32.805 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:32.805 ++ RUN_NIGHTLY=0 00:06:32.805 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:32.805 + [[ -n '' ]] 00:06:32.805 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:06:32.805 + for M in /var/spdk/build-*-manifest.txt 00:06:32.805 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:06:32.805 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:32.805 + for M in /var/spdk/build-*-manifest.txt 00:06:32.805 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:32.805 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:32.805 + for M in /var/spdk/build-*-manifest.txt 00:06:32.805 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:06:32.805 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:32.805 ++ uname 00:06:32.805 + [[ Linux == \L\i\n\u\x ]] 00:06:32.805 + sudo dmesg -T 00:06:32.805 + sudo dmesg --clear 00:06:32.805 + dmesg_pid=5203 00:06:32.805 + [[ Fedora Linux == FreeBSD ]] 00:06:32.805 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:32.805 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:32.805 + sudo dmesg -Tw 00:06:32.805 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:32.805 + [[ -x /usr/src/fio-static/fio ]] 00:06:32.805 + export FIO_BIN=/usr/src/fio-static/fio 
00:06:32.805 + FIO_BIN=/usr/src/fio-static/fio 00:06:32.805 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:32.805 + [[ ! -v VFIO_QEMU_BIN ]] 00:06:32.805 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:32.805 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:32.805 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:32.805 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:32.805 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:32.805 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:32.805 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:32.805 Test configuration: 00:06:32.805 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:32.805 SPDK_RUN_ASAN=1 00:06:32.805 SPDK_RUN_UBSAN=1 00:06:32.805 SPDK_TEST_RAID=1 00:06:32.805 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:33.064 RUN_NIGHTLY=0 09:06:16 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:06:33.064 09:06:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:33.064 09:06:16 -- scripts/common.sh@15 -- $ shopt -s extglob 00:06:33.064 09:06:16 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:33.064 09:06:16 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.064 09:06:16 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.064 09:06:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.064 09:06:16 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.064 09:06:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.064 09:06:16 -- paths/export.sh@5 -- $ export PATH 00:06:33.064 09:06:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.064 09:06:16 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:06:33.064 09:06:16 -- common/autobuild_common.sh@486 -- $ date +%s 00:06:33.064 09:06:16 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728983176.XXXXXX 00:06:33.064 09:06:16 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728983176.ltfZaR 00:06:33.064 09:06:16 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:06:33.064 09:06:16 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:06:33.064 09:06:16 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:06:33.064 09:06:16 -- common/autobuild_common.sh@499 
-- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:06:33.064 09:06:16 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:06:33.064 09:06:16 -- common/autobuild_common.sh@502 -- $ get_config_params 00:06:33.064 09:06:16 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:06:33.064 09:06:16 -- common/autotest_common.sh@10 -- $ set +x 00:06:33.064 09:06:16 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:06:33.064 09:06:16 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:06:33.064 09:06:16 -- pm/common@17 -- $ local monitor 00:06:33.064 09:06:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:33.064 09:06:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:33.064 09:06:16 -- pm/common@25 -- $ sleep 1 00:06:33.064 09:06:16 -- pm/common@21 -- $ date +%s 00:06:33.064 09:06:16 -- pm/common@21 -- $ date +%s 00:06:33.064 09:06:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728983176 00:06:33.064 09:06:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728983176 00:06:33.064 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728983176_collect-cpu-load.pm.log 00:06:33.064 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728983176_collect-vmstat.pm.log 00:06:33.998 09:06:17 -- common/autobuild_common.sh@505 -- 
$ trap stop_monitor_resources EXIT 00:06:33.998 09:06:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:33.998 09:06:17 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:33.998 09:06:17 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:06:33.998 09:06:17 -- spdk/autobuild.sh@16 -- $ date -u 00:06:33.998 Tue Oct 15 09:06:17 AM UTC 2024 00:06:33.998 09:06:17 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:06:33.998 v25.01-pre-78-gaa3f30c36 00:06:33.998 09:06:17 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:06:33.998 09:06:17 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:06:33.998 09:06:17 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:06:33.998 09:06:17 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:06:33.998 09:06:17 -- common/autotest_common.sh@10 -- $ set +x 00:06:33.998 ************************************ 00:06:33.998 START TEST asan 00:06:33.998 ************************************ 00:06:33.998 using asan 00:06:33.998 09:06:17 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:06:33.998 00:06:33.998 real 0m0.000s 00:06:33.998 user 0m0.000s 00:06:33.998 sys 0m0.000s 00:06:33.998 09:06:17 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:06:33.998 09:06:17 asan -- common/autotest_common.sh@10 -- $ set +x 00:06:33.998 ************************************ 00:06:33.998 END TEST asan 00:06:33.998 ************************************ 00:06:33.998 09:06:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:33.998 09:06:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:33.998 09:06:17 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:06:33.998 09:06:17 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:06:33.998 09:06:17 -- common/autotest_common.sh@10 -- $ set +x 00:06:33.998 ************************************ 00:06:33.998 START TEST ubsan 00:06:33.998 ************************************ 00:06:33.998 using ubsan 00:06:33.998 09:06:17 ubsan -- 
common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:06:33.999 00:06:33.999 real 0m0.000s 00:06:33.999 user 0m0.000s 00:06:33.999 sys 0m0.000s 00:06:33.999 09:06:17 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:06:33.999 09:06:17 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:06:33.999 ************************************ 00:06:33.999 END TEST ubsan 00:06:33.999 ************************************ 00:06:33.999 09:06:17 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:33.999 09:06:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:33.999 09:06:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:33.999 09:06:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:33.999 09:06:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:33.999 09:06:17 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:33.999 09:06:17 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:06:33.999 09:06:17 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:33.999 09:06:17 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:06:34.256 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:34.256 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:34.821 Using 'verbs' RDMA provider 00:06:50.661 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:07:02.865 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:07:02.865 Creating mk/config.mk...done. 00:07:02.865 Creating mk/cc.flags.mk...done. 00:07:02.865 Type 'make' to build. 
00:07:02.865 09:06:46 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:07:02.865 09:06:46 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:07:02.865 09:06:46 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:07:02.865 09:06:46 -- common/autotest_common.sh@10 -- $ set +x
00:07:02.865 ************************************
00:07:02.865 START TEST make
00:07:02.865 ************************************
00:07:02.865 09:06:46 make -- common/autotest_common.sh@1125 -- $ make -j10
00:07:02.865 make[1]: Nothing to be done for 'all'.
00:07:17.743 The Meson build system
00:07:17.743 Version: 1.5.0
00:07:17.743 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:07:17.743 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:07:17.743 Build type: native build
00:07:17.743 Program cat found: YES (/usr/bin/cat)
00:07:17.743 Project name: DPDK
00:07:17.743 Project version: 24.03.0
00:07:17.743 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:07:17.743 C linker for the host machine: cc ld.bfd 2.40-14
00:07:17.743 Host machine cpu family: x86_64
00:07:17.743 Host machine cpu: x86_64
00:07:17.743 Message: ## Building in Developer Mode ##
00:07:17.743 Program pkg-config found: YES (/usr/bin/pkg-config)
00:07:17.743 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:07:17.743 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:07:17.743 Program python3 found: YES (/usr/bin/python3)
00:07:17.743 Program cat found: YES (/usr/bin/cat)
00:07:17.743 Compiler for C supports arguments -march=native: YES
00:07:17.743 Checking for size of "void *" : 8
00:07:17.743 Checking for size of "void *" : 8 (cached)
00:07:17.743 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:07:17.743 Library m found: YES
00:07:17.743 Library numa found: YES
00:07:17.743 Has header "numaif.h" : YES
00:07:17.743 Library fdt found: NO
00:07:17.743 Library execinfo found: NO
00:07:17.743 Has header "execinfo.h" : YES
00:07:17.743 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:07:17.743 Run-time dependency libarchive found: NO (tried pkgconfig)
00:07:17.743 Run-time dependency libbsd found: NO (tried pkgconfig)
00:07:17.743 Run-time dependency jansson found: NO (tried pkgconfig)
00:07:17.743 Run-time dependency openssl found: YES 3.1.1
00:07:17.743 Run-time dependency libpcap found: YES 1.10.4
00:07:17.743 Has header "pcap.h" with dependency libpcap: YES
00:07:17.743 Compiler for C supports arguments -Wcast-qual: YES
00:07:17.743 Compiler for C supports arguments -Wdeprecated: YES
00:07:17.743 Compiler for C supports arguments -Wformat: YES
00:07:17.743 Compiler for C supports arguments -Wformat-nonliteral: NO
00:07:17.743 Compiler for C supports arguments -Wformat-security: NO
00:07:17.743 Compiler for C supports arguments -Wmissing-declarations: YES
00:07:17.743 Compiler for C supports arguments -Wmissing-prototypes: YES
00:07:17.743 Compiler for C supports arguments -Wnested-externs: YES
00:07:17.743 Compiler for C supports arguments -Wold-style-definition: YES
00:07:17.743 Compiler for C supports arguments -Wpointer-arith: YES
00:07:17.743 Compiler for C supports arguments -Wsign-compare: YES
00:07:17.743 Compiler for C supports arguments -Wstrict-prototypes: YES
00:07:17.743 Compiler for C supports arguments -Wundef: YES
00:07:17.743 Compiler for C supports arguments -Wwrite-strings: YES
00:07:17.743 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:07:17.743 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:07:17.743 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:07:17.743 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:07:17.743 Program objdump found: YES (/usr/bin/objdump)
00:07:17.743 Compiler for C supports arguments -mavx512f: YES
00:07:17.743 Checking if "AVX512 checking" compiles: YES
00:07:17.743 Fetching value of define "__SSE4_2__" : 1
00:07:17.743 Fetching value of define "__AES__" : 1
00:07:17.743 Fetching value of define "__AVX__" : 1
00:07:17.743 Fetching value of define "__AVX2__" : 1
00:07:17.743 Fetching value of define "__AVX512BW__" : (undefined)
00:07:17.743 Fetching value of define "__AVX512CD__" : (undefined)
00:07:17.743 Fetching value of define "__AVX512DQ__" : (undefined)
00:07:17.743 Fetching value of define "__AVX512F__" : (undefined)
00:07:17.743 Fetching value of define "__AVX512VL__" : (undefined)
00:07:17.743 Fetching value of define "__PCLMUL__" : 1
00:07:17.743 Fetching value of define "__RDRND__" : 1
00:07:17.743 Fetching value of define "__RDSEED__" : 1
00:07:17.743 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:07:17.743 Fetching value of define "__znver1__" : (undefined)
00:07:17.743 Fetching value of define "__znver2__" : (undefined)
00:07:17.743 Fetching value of define "__znver3__" : (undefined)
00:07:17.743 Fetching value of define "__znver4__" : (undefined)
00:07:17.743 Library asan found: YES
00:07:17.743 Compiler for C supports arguments -Wno-format-truncation: YES
00:07:17.743 Message: lib/log: Defining dependency "log"
00:07:17.743 Message: lib/kvargs: Defining dependency "kvargs"
00:07:17.743 Message: lib/telemetry: Defining dependency "telemetry"
00:07:17.743 Library rt found: YES
00:07:17.743 Checking for function "getentropy" : NO
00:07:17.743 Message: lib/eal: Defining dependency "eal"
00:07:17.743 Message: lib/ring: Defining dependency "ring"
00:07:17.743 Message: lib/rcu: Defining dependency "rcu"
00:07:17.743 Message: lib/mempool: Defining dependency "mempool"
00:07:17.743 Message: lib/mbuf: Defining dependency "mbuf"
00:07:17.743 Fetching value of define "__PCLMUL__" : 1 (cached)
00:07:17.743 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:07:17.743 Compiler for C supports arguments -mpclmul: YES
00:07:17.743 Compiler for C supports arguments -maes: YES
00:07:17.743 Compiler for C supports arguments -mavx512f: YES (cached)
00:07:17.743 Compiler for C supports arguments -mavx512bw: YES
00:07:17.743 Compiler for C supports arguments -mavx512dq: YES
00:07:17.743 Compiler for C supports arguments -mavx512vl: YES
00:07:17.743 Compiler for C supports arguments -mvpclmulqdq: YES
00:07:17.743 Compiler for C supports arguments -mavx2: YES
00:07:17.743 Compiler for C supports arguments -mavx: YES
00:07:17.743 Message: lib/net: Defining dependency "net"
00:07:17.743 Message: lib/meter: Defining dependency "meter"
00:07:17.743 Message: lib/ethdev: Defining dependency "ethdev"
00:07:17.743 Message: lib/pci: Defining dependency "pci"
00:07:17.743 Message: lib/cmdline: Defining dependency "cmdline"
00:07:17.743 Message: lib/hash: Defining dependency "hash"
00:07:17.743 Message: lib/timer: Defining dependency "timer"
00:07:17.743 Message: lib/compressdev: Defining dependency "compressdev"
00:07:17.743 Message: lib/cryptodev: Defining dependency "cryptodev"
00:07:17.743 Message: lib/dmadev: Defining dependency "dmadev"
00:07:17.743 Compiler for C supports arguments -Wno-cast-qual: YES
00:07:17.743 Message: lib/power: Defining dependency "power"
00:07:17.743 Message: lib/reorder: Defining dependency "reorder"
00:07:17.743 Message: lib/security: Defining dependency "security"
00:07:17.744 Has header "linux/userfaultfd.h" : YES
00:07:17.744 Has header "linux/vduse.h" : YES
00:07:17.744 Message: lib/vhost: Defining dependency "vhost"
00:07:17.744 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:07:17.744 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:07:17.744 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:07:17.744 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:07:17.744 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:07:17.744 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:07:17.744 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:07:17.744 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:07:17.744 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:07:17.744 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:07:17.744 Program doxygen found: YES (/usr/local/bin/doxygen)
00:07:17.744 Configuring doxy-api-html.conf using configuration
00:07:17.744 Configuring doxy-api-man.conf using configuration
00:07:17.744 Program mandb found: YES (/usr/bin/mandb)
00:07:17.744 Program sphinx-build found: NO
00:07:17.744 Configuring rte_build_config.h using configuration
00:07:17.744 Message:
00:07:17.744 =================
00:07:17.744 Applications Enabled
00:07:17.744 =================
00:07:17.744
00:07:17.744 apps:
00:07:17.744
00:07:17.744
00:07:17.744 Message:
00:07:17.744 =================
00:07:17.744 Libraries Enabled
00:07:17.744 =================
00:07:17.744
00:07:17.744 libs:
00:07:17.744 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:07:17.744 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:07:17.744 cryptodev, dmadev, power, reorder, security, vhost,
00:07:17.744
00:07:17.744 Message:
00:07:17.744 ===============
00:07:17.744 Drivers Enabled
00:07:17.744 ===============
00:07:17.744
00:07:17.744 common:
00:07:17.744
00:07:17.744 bus:
00:07:17.744 pci, vdev,
00:07:17.744 mempool:
00:07:17.744 ring,
00:07:17.744 dma:
00:07:17.744
00:07:17.744 net:
00:07:17.744
00:07:17.744 crypto:
00:07:17.744
00:07:17.744 compress:
00:07:17.744
00:07:17.744 vdpa:
00:07:17.744
00:07:17.744
00:07:17.744 Message:
00:07:17.744 =================
00:07:17.744 Content Skipped
00:07:17.744 =================
00:07:17.744
00:07:17.744 apps:
00:07:17.744 dumpcap: explicitly disabled via build config
00:07:17.744 graph: explicitly disabled via build config
00:07:17.744 pdump: explicitly disabled via build config
00:07:17.744 proc-info: explicitly disabled via build config
00:07:17.744 test-acl: explicitly disabled via build config
00:07:17.744 test-bbdev: explicitly disabled via build config
00:07:17.744 test-cmdline: explicitly disabled via build config
00:07:17.744 test-compress-perf: explicitly disabled via build config
00:07:17.744 test-crypto-perf: explicitly disabled via build config
00:07:17.744 test-dma-perf: explicitly disabled via build config
00:07:17.744 test-eventdev: explicitly disabled via build config
00:07:17.744 test-fib: explicitly disabled via build config
00:07:17.744 test-flow-perf: explicitly disabled via build config
00:07:17.744 test-gpudev: explicitly disabled via build config
00:07:17.744 test-mldev: explicitly disabled via build config
00:07:17.744 test-pipeline: explicitly disabled via build config
00:07:17.744 test-pmd: explicitly disabled via build config
00:07:17.744 test-regex: explicitly disabled via build config
00:07:17.744 test-sad: explicitly disabled via build config
00:07:17.744 test-security-perf: explicitly disabled via build config
00:07:17.744
00:07:17.744 libs:
00:07:17.744 argparse: explicitly disabled via build config
00:07:17.744 metrics: explicitly disabled via build config
00:07:17.744 acl: explicitly disabled via build config
00:07:17.744 bbdev: explicitly disabled via build config
00:07:17.744 bitratestats: explicitly disabled via build config
00:07:17.744 bpf: explicitly disabled via build config
00:07:17.744 cfgfile: explicitly disabled via build config
00:07:17.744 distributor: explicitly disabled via build config
00:07:17.744 efd: explicitly disabled via build config
00:07:17.744 eventdev: explicitly disabled via build config
00:07:17.744 dispatcher: explicitly disabled via build config
00:07:17.744 gpudev: explicitly disabled via build config
00:07:17.744 gro: explicitly disabled via build config
00:07:17.744 gso: explicitly disabled via build config
00:07:17.744 ip_frag: explicitly disabled via build config
00:07:17.744 jobstats: explicitly disabled via build config
00:07:17.744 latencystats: explicitly disabled via build config
00:07:17.744 lpm: explicitly disabled via build config
00:07:17.744 member: explicitly disabled via build config
00:07:17.744 pcapng: explicitly disabled via build config
00:07:17.744 rawdev: explicitly disabled via build config
00:07:17.744 regexdev: explicitly disabled via build config
00:07:17.744 mldev: explicitly disabled via build config
00:07:17.744 rib: explicitly disabled via build config
00:07:17.744 sched: explicitly disabled via build config
00:07:17.744 stack: explicitly disabled via build config
00:07:17.744 ipsec: explicitly disabled via build config
00:07:17.744 pdcp: explicitly disabled via build config
00:07:17.744 fib: explicitly disabled via build config
00:07:17.744 port: explicitly disabled via build config
00:07:17.744 pdump: explicitly disabled via build config
00:07:17.744 table: explicitly disabled via build config
00:07:17.744 pipeline: explicitly disabled via build config
00:07:17.744 graph: explicitly disabled via build config
00:07:17.744 node: explicitly disabled via build config
00:07:17.744
00:07:17.744 drivers:
00:07:17.744 common/cpt: not in enabled drivers build config
00:07:17.744 common/dpaax: not in enabled drivers build config
00:07:17.744 common/iavf: not in enabled drivers build config
00:07:17.744 common/idpf: not in enabled drivers build config
00:07:17.744 common/ionic: not in enabled drivers build config
00:07:17.744 common/mvep: not in enabled drivers build config
00:07:17.744 common/octeontx: not in enabled drivers build config
00:07:17.744 bus/auxiliary: not in enabled drivers build config
00:07:17.744 bus/cdx: not in enabled drivers build config
00:07:17.744 bus/dpaa: not in enabled drivers build config
00:07:17.744 bus/fslmc: not in enabled drivers build config
00:07:17.744 bus/ifpga: not in enabled drivers build config
00:07:17.744 bus/platform: not in enabled drivers build config
00:07:17.744 bus/uacce: not in enabled drivers build config
00:07:17.744 bus/vmbus: not in enabled drivers build config
00:07:17.744 common/cnxk: not in enabled drivers build config
00:07:17.744 common/mlx5: not in enabled drivers build config
00:07:17.744 common/nfp: not in enabled drivers build config
00:07:17.744 common/nitrox: not in enabled drivers build config
00:07:17.744 common/qat: not in enabled drivers build config
00:07:17.744 common/sfc_efx: not in enabled drivers build config
00:07:17.744 mempool/bucket: not in enabled drivers build config
00:07:17.744 mempool/cnxk: not in enabled drivers build config
00:07:17.744 mempool/dpaa: not in enabled drivers build config
00:07:17.744 mempool/dpaa2: not in enabled drivers build config
00:07:17.744 mempool/octeontx: not in enabled drivers build config
00:07:17.744 mempool/stack: not in enabled drivers build config
00:07:17.744 dma/cnxk: not in enabled drivers build config
00:07:17.744 dma/dpaa: not in enabled drivers build config
00:07:17.744 dma/dpaa2: not in enabled drivers build config
00:07:17.744 dma/hisilicon: not in enabled drivers build config
00:07:17.744 dma/idxd: not in enabled drivers build config
00:07:17.744 dma/ioat: not in enabled drivers build config
00:07:17.744 dma/skeleton: not in enabled drivers build config
00:07:17.744 net/af_packet: not in enabled drivers build config
00:07:17.744 net/af_xdp: not in enabled drivers build config
00:07:17.744 net/ark: not in enabled drivers build config
00:07:17.744 net/atlantic: not in enabled drivers build config
00:07:17.744 net/avp: not in enabled drivers build config
00:07:17.744 net/axgbe: not in enabled drivers build config
00:07:17.744 net/bnx2x: not in enabled drivers build config
00:07:17.744 net/bnxt: not in enabled drivers build config
00:07:17.744 net/bonding: not in enabled drivers build config
00:07:17.744 net/cnxk: not in enabled drivers build config
00:07:17.744 net/cpfl: not in enabled drivers build config
00:07:17.744 net/cxgbe: not in enabled drivers build config
00:07:17.744 net/dpaa: not in enabled drivers build config
00:07:17.744 net/dpaa2: not in enabled drivers build config
00:07:17.744 net/e1000: not in enabled drivers build config
00:07:17.744 net/ena: not in enabled drivers build config
00:07:17.744 net/enetc: not in enabled drivers build config
00:07:17.744 net/enetfec: not in enabled drivers build config
00:07:17.744 net/enic: not in enabled drivers build config
00:07:17.744 net/failsafe: not in enabled drivers build config
00:07:17.744 net/fm10k: not in enabled drivers build config
00:07:17.744 net/gve: not in enabled drivers build config
00:07:17.744 net/hinic: not in enabled drivers build config
00:07:17.744 net/hns3: not in enabled drivers build config
00:07:17.744 net/i40e: not in enabled drivers build config
00:07:17.744 net/iavf: not in enabled drivers build config
00:07:17.744 net/ice: not in enabled drivers build config
00:07:17.744 net/idpf: not in enabled drivers build config
00:07:17.744 net/igc: not in enabled drivers build config
00:07:17.744 net/ionic: not in enabled drivers build config
00:07:17.744 net/ipn3ke: not in enabled drivers build config
00:07:17.744 net/ixgbe: not in enabled drivers build config
00:07:17.744 net/mana: not in enabled drivers build config
00:07:17.744 net/memif: not in enabled drivers build config
00:07:17.744 net/mlx4: not in enabled drivers build config
00:07:17.744 net/mlx5: not in enabled drivers build config
00:07:17.744 net/mvneta: not in enabled drivers build config
00:07:17.744 net/mvpp2: not in enabled drivers build config
00:07:17.744 net/netvsc: not in enabled drivers build config
00:07:17.744 net/nfb: not in enabled drivers build config
00:07:17.744 net/nfp: not in enabled drivers build config
00:07:17.744 net/ngbe: not in enabled drivers build config
00:07:17.744 net/null: not in enabled drivers build config
00:07:17.744 net/octeontx: not in enabled drivers build config
00:07:17.744 net/octeon_ep: not in enabled drivers build config
00:07:17.744 net/pcap: not in enabled drivers build config
00:07:17.744 net/pfe: not in enabled drivers build config
00:07:17.744 net/qede: not in enabled drivers build config
00:07:17.744 net/ring: not in enabled drivers build config
00:07:17.744 net/sfc: not in enabled drivers build config
00:07:17.744 net/softnic: not in enabled drivers build config
00:07:17.744 net/tap: not in enabled drivers build config
00:07:17.744 net/thunderx: not in enabled drivers build config
00:07:17.744 net/txgbe: not in enabled drivers build config
00:07:17.744 net/vdev_netvsc: not in enabled drivers build config
00:07:17.744 net/vhost: not in enabled drivers build config
00:07:17.744 net/virtio: not in enabled drivers build config
00:07:17.744 net/vmxnet3: not in enabled drivers build config
00:07:17.744 raw/*: missing internal dependency, "rawdev"
00:07:17.744 crypto/armv8: not in enabled drivers build config
00:07:17.744 crypto/bcmfs: not in enabled drivers build config
00:07:17.744 crypto/caam_jr: not in enabled drivers build config
00:07:17.744 crypto/ccp: not in enabled drivers build config
00:07:17.744 crypto/cnxk: not in enabled drivers build config
00:07:17.744 crypto/dpaa_sec: not in enabled drivers build config
00:07:17.744 crypto/dpaa2_sec: not in enabled drivers build config
00:07:17.744 crypto/ipsec_mb: not in enabled drivers build config
00:07:17.744 crypto/mlx5: not in enabled drivers build config
00:07:17.744 crypto/mvsam: not in enabled drivers build config
00:07:17.744 crypto/nitrox: not in enabled drivers build config
00:07:17.744 crypto/null: not in enabled drivers build config
00:07:17.744 crypto/octeontx: not in enabled drivers build config
00:07:17.744 crypto/openssl: not in enabled drivers build config
00:07:17.744 crypto/scheduler: not in enabled drivers build config
00:07:17.744 crypto/uadk: not in enabled drivers build config
00:07:17.744 crypto/virtio: not in enabled drivers build config
00:07:17.744 compress/isal: not in enabled drivers build config
00:07:17.744 compress/mlx5: not in enabled drivers build config
00:07:17.744 compress/nitrox: not in enabled drivers build config
00:07:17.744 compress/octeontx: not in enabled drivers build config
00:07:17.744 compress/zlib: not in enabled drivers build config
00:07:17.744 regex/*: missing internal dependency, "regexdev"
00:07:17.744 ml/*: missing internal dependency, "mldev"
00:07:17.744 vdpa/ifc: not in enabled drivers build config
00:07:17.744 vdpa/mlx5: not in enabled drivers build config
00:07:17.744 vdpa/nfp: not in enabled drivers build config
00:07:17.744 vdpa/sfc: not in enabled drivers build config
00:07:17.744 event/*: missing internal dependency, "eventdev"
00:07:17.744 baseband/*: missing internal dependency, "bbdev"
00:07:17.744 gpu/*: missing internal dependency, "gpudev"
00:07:17.744
00:07:17.744 Build targets in project: 85
00:07:17.744
00:07:17.744 DPDK 24.03.0
00:07:17.744
00:07:17.744 User defined options
00:07:17.744 buildtype : debug
00:07:17.744 default_library : shared
00:07:17.744 libdir : lib
00:07:17.744 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:07:17.745 b_sanitize : address
00:07:17.745 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:07:17.745 c_link_args :
00:07:17.745 cpu_instruction_set: native
00:07:17.745 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:07:17.745 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:07:17.745 enable_docs : false
00:07:17.745 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:07:17.745 enable_kmods : false
00:07:17.745 max_lcores : 128
00:07:17.745 tests : false
00:07:17.745
00:07:17.745 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:07:17.745 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:07:17.745 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:07:17.745 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:07:17.745 [3/268] Linking static target lib/librte_kvargs.a
00:07:17.745 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:07:17.745 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:07:17.745 [6/268] Linking static target lib/librte_log.a
00:07:17.745 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:07:17.745 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:07:17.745 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:07:17.745 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:07:17.745 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:07:17.745 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:07:18.001 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:07:18.001 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:07:18.001 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:07:18.001 [16/268] Linking static target lib/librte_telemetry.a
00:07:18.001 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:07:18.260 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:07:18.260 [19/268] Linking target lib/librte_log.so.24.1
00:07:18.260 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:07:18.516 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:07:18.774 [22/268] Linking target lib/librte_kvargs.so.24.1
00:07:18.774 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:07:18.774 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:07:18.774 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:07:18.774 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:07:18.774 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:07:19.031 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:07:19.031 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:07:19.031 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:07:19.290 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:07:19.290 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:07:19.290 [33/268] Linking target lib/librte_telemetry.so.24.1
00:07:19.547 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:07:19.805 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:07:19.805 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:07:19.805 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:07:20.061 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:07:20.061 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:07:20.061 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:07:20.061 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:07:20.061 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:07:20.061 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:07:20.318 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:07:20.318 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:07:20.318 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:07:20.576 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:07:20.834 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:07:20.834 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:07:21.091 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:07:21.091 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:07:21.091 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:07:21.347 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:07:21.347 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:07:21.347 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:07:21.347 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:07:21.605 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:07:21.605 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:07:21.605 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:07:21.862 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:07:22.120 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:07:22.120 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:07:22.120 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:07:22.377 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:07:22.377 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:07:22.377 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:07:22.377 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:07:22.635 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:07:22.896 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:07:22.896 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:07:22.896 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:07:22.896 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:07:22.896 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:07:22.896 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:07:22.896 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:07:22.896 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:07:23.154 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:07:23.154 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:07:23.412 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:07:23.412 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:07:23.709 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:07:23.709 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:07:23.709 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:07:24.034 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:07:24.034 [85/268] Linking static target lib/librte_ring.a
00:07:24.034 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:07:24.034 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:07:24.034 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:07:24.291 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:07:24.291 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:07:24.291 [91/268] Linking static target lib/librte_eal.a
00:07:24.291 [92/268] Linking static target lib/librte_mempool.a
00:07:24.549 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:07:24.806 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:07:24.806 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:07:24.806 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:07:24.806 [97/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:07:24.806 [98/268] Linking static target lib/librte_rcu.a
00:07:25.063 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:07:25.063 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:07:25.063 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:07:25.321 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:07:25.321 [103/268] Linking static target lib/librte_mbuf.a
00:07:25.321 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:07:25.321 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:07:25.579 [106/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:07:25.579 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:07:25.837 [108/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:07:25.837 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:07:25.837 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:07:25.837 [111/268] Linking static target lib/librte_net.a
00:07:25.837 [112/268] Linking static target lib/librte_meter.a
00:07:26.095 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:07:26.095 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:07:26.095 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:07:26.353 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:07:26.353 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:07:26.353 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:07:26.612 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:07:26.871 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:07:27.185 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:07:27.444 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:07:27.444 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:07:27.444 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:07:27.444 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:07:27.444 [126/268] Linking static target lib/librte_pci.a
00:07:28.011 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:07:28.011 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:07:28.011 [129/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:07:28.011 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:07:28.011 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:07:28.269 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:07:28.269 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:07:28.269 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:07:28.269 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:07:28.269 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:07:28.269 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:07:28.269 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:07:28.269 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:07:28.528 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:07:28.528 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:07:28.528 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:07:28.528 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:07:28.528 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:07:28.787 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:07:28.787 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:07:28.787 [147/268] Linking static target lib/librte_cmdline.a
00:07:29.045 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:07:29.303 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:07:29.303 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:07:29.303 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:07:29.561 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:07:29.561 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:07:29.818 [154/268] Linking static target lib/librte_timer.a
00:07:29.818 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:07:30.075 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:07:30.334 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:07:30.334 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:07:30.334 [159/268] Linking static target lib/librte_compressdev.a
00:07:30.593 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:07:30.593 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:07:30.593 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:07:30.593 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:07:30.593 [164/268] Linking static target lib/librte_ethdev.a
00:07:30.593 [165/268] Linking static target lib/librte_hash.a
00:07:30.593 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:07:30.851 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:07:30.851 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:07:30.851 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:07:30.851 [170/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:07:30.851 [171/268] Linking static target lib/librte_dmadev.a
00:07:31.420 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:07:31.420 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:07:31.420 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:07:31.678 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:07:31.678 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:07:31.678 [177/268] Linking static target lib/librte_cryptodev.a
00:07:31.678 [178/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to
capture output) 00:07:31.935 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:31.935 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:32.194 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:32.194 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:32.194 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:32.194 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:32.452 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:32.452 [186/268] Linking static target lib/librte_power.a 00:07:33.017 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:33.017 [188/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:33.017 [189/268] Linking static target lib/librte_security.a 00:07:33.017 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:33.017 [191/268] Linking static target lib/librte_reorder.a 00:07:33.017 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:33.275 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:33.532 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:33.790 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:33.790 [196/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:34.048 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:34.306 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:34.306 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:34.306 [200/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:07:34.564 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:34.823 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:35.087 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:35.087 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:35.087 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:35.344 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:35.601 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:35.602 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:35.602 [209/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:35.860 [210/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:35.860 [211/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:35.860 [212/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:35.860 [213/268] Linking static target drivers/librte_bus_pci.a 00:07:35.860 [214/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:36.118 [215/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:36.118 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:36.376 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:36.376 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:36.376 [219/268] Linking static target drivers/librte_bus_vdev.a 00:07:36.376 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:36.376 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:36.634 [222/268] Generating 
drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:36.634 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:36.892 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:36.892 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:36.892 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:36.892 [227/268] Linking static target drivers/librte_mempool_ring.a 00:07:38.266 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:38.266 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:38.266 [230/268] Linking target lib/librte_eal.so.24.1 00:07:38.266 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:38.523 [232/268] Linking target lib/librte_meter.so.24.1 00:07:38.523 [233/268] Linking target lib/librte_dmadev.so.24.1 00:07:38.523 [234/268] Linking target lib/librte_ring.so.24.1 00:07:38.523 [235/268] Linking target lib/librte_pci.so.24.1 00:07:38.523 [236/268] Linking target lib/librte_timer.so.24.1 00:07:38.523 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:07:38.523 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:38.523 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:38.523 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:38.523 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:38.788 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:07:38.788 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:38.788 [244/268] Linking target lib/librte_mempool.so.24.1 
00:07:38.788 [245/268] Linking target lib/librte_rcu.so.24.1 00:07:39.046 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:39.046 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:39.046 [248/268] Linking target lib/librte_mbuf.so.24.1 00:07:39.046 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:07:39.046 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:39.304 [251/268] Linking target lib/librte_reorder.so.24.1 00:07:39.304 [252/268] Linking target lib/librte_compressdev.so.24.1 00:07:39.304 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:07:39.304 [254/268] Linking target lib/librte_net.so.24.1 00:07:39.304 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:39.304 [256/268] Linking target lib/librte_hash.so.24.1 00:07:39.562 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:39.562 [258/268] Linking target lib/librte_cmdline.so.24.1 00:07:39.562 [259/268] Linking target lib/librte_security.so.24.1 00:07:39.562 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:39.821 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:39.821 [262/268] Linking target lib/librte_ethdev.so.24.1 00:07:40.079 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:40.079 [264/268] Linking target lib/librte_power.so.24.1 00:07:44.263 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:44.263 [266/268] Linking static target lib/librte_vhost.a 00:07:45.635 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:45.635 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:45.635 INFO: autodetecting backend as ninja 00:07:45.635 INFO: 
calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:08:12.174 CC lib/ut/ut.o 00:08:12.174 CC lib/log/log.o 00:08:12.174 CC lib/ut_mock/mock.o 00:08:12.174 CC lib/log/log_deprecated.o 00:08:12.174 CC lib/log/log_flags.o 00:08:12.174 LIB libspdk_ut.a 00:08:12.174 LIB libspdk_ut_mock.a 00:08:12.174 SO libspdk_ut_mock.so.6.0 00:08:12.174 SO libspdk_ut.so.2.0 00:08:12.174 LIB libspdk_log.a 00:08:12.174 SO libspdk_log.so.7.1 00:08:12.174 SYMLINK libspdk_ut_mock.so 00:08:12.174 SYMLINK libspdk_ut.so 00:08:12.174 SYMLINK libspdk_log.so 00:08:12.174 CC lib/util/base64.o 00:08:12.174 CC lib/util/bit_array.o 00:08:12.174 CXX lib/trace_parser/trace.o 00:08:12.174 CC lib/util/cpuset.o 00:08:12.174 CC lib/util/crc16.o 00:08:12.174 CC lib/util/crc32.o 00:08:12.174 CC lib/util/crc32c.o 00:08:12.174 CC lib/dma/dma.o 00:08:12.174 CC lib/ioat/ioat.o 00:08:12.174 CC lib/vfio_user/host/vfio_user_pci.o 00:08:12.174 CC lib/util/crc32_ieee.o 00:08:12.174 CC lib/util/crc64.o 00:08:12.174 CC lib/util/dif.o 00:08:12.174 CC lib/util/fd.o 00:08:12.174 LIB libspdk_dma.a 00:08:12.174 CC lib/util/fd_group.o 00:08:12.174 SO libspdk_dma.so.5.0 00:08:12.174 CC lib/vfio_user/host/vfio_user.o 00:08:12.174 CC lib/util/file.o 00:08:12.174 SYMLINK libspdk_dma.so 00:08:12.174 CC lib/util/hexlify.o 00:08:12.174 CC lib/util/iov.o 00:08:12.174 CC lib/util/math.o 00:08:12.174 LIB libspdk_ioat.a 00:08:12.174 CC lib/util/net.o 00:08:12.174 SO libspdk_ioat.so.7.0 00:08:12.174 CC lib/util/pipe.o 00:08:12.174 CC lib/util/strerror_tls.o 00:08:12.174 SYMLINK libspdk_ioat.so 00:08:12.174 CC lib/util/string.o 00:08:12.174 CC lib/util/uuid.o 00:08:12.174 LIB libspdk_vfio_user.a 00:08:12.174 CC lib/util/xor.o 00:08:12.174 SO libspdk_vfio_user.so.5.0 00:08:12.174 CC lib/util/zipf.o 00:08:12.174 CC lib/util/md5.o 00:08:12.174 SYMLINK libspdk_vfio_user.so 00:08:12.174 LIB libspdk_util.a 00:08:12.174 SO libspdk_util.so.10.1 00:08:12.174 SYMLINK libspdk_util.so 
00:08:12.433 LIB libspdk_trace_parser.a 00:08:12.433 SO libspdk_trace_parser.so.6.0 00:08:12.433 CC lib/vmd/vmd.o 00:08:12.433 CC lib/vmd/led.o 00:08:12.433 CC lib/env_dpdk/env.o 00:08:12.433 CC lib/env_dpdk/memory.o 00:08:12.433 SYMLINK libspdk_trace_parser.so 00:08:12.433 CC lib/idxd/idxd.o 00:08:12.433 CC lib/rdma_utils/rdma_utils.o 00:08:12.433 CC lib/idxd/idxd_user.o 00:08:12.433 CC lib/conf/conf.o 00:08:12.433 CC lib/rdma_provider/common.o 00:08:12.433 CC lib/json/json_parse.o 00:08:12.691 CC lib/json/json_util.o 00:08:12.691 CC lib/rdma_provider/rdma_provider_verbs.o 00:08:12.691 LIB libspdk_conf.a 00:08:12.691 SO libspdk_conf.so.6.0 00:08:12.948 CC lib/idxd/idxd_kernel.o 00:08:12.948 SYMLINK libspdk_conf.so 00:08:12.948 CC lib/env_dpdk/pci.o 00:08:12.948 CC lib/json/json_write.o 00:08:12.948 LIB libspdk_rdma_utils.a 00:08:12.948 SO libspdk_rdma_utils.so.1.0 00:08:12.948 LIB libspdk_rdma_provider.a 00:08:12.948 SO libspdk_rdma_provider.so.6.0 00:08:12.948 SYMLINK libspdk_rdma_utils.so 00:08:12.948 CC lib/env_dpdk/init.o 00:08:12.948 CC lib/env_dpdk/threads.o 00:08:12.948 SYMLINK libspdk_rdma_provider.so 00:08:12.948 CC lib/env_dpdk/pci_ioat.o 00:08:12.948 CC lib/env_dpdk/pci_virtio.o 00:08:13.206 CC lib/env_dpdk/pci_vmd.o 00:08:13.206 CC lib/env_dpdk/pci_idxd.o 00:08:13.206 CC lib/env_dpdk/pci_event.o 00:08:13.206 LIB libspdk_json.a 00:08:13.206 SO libspdk_json.so.6.0 00:08:13.463 LIB libspdk_idxd.a 00:08:13.463 CC lib/env_dpdk/sigbus_handler.o 00:08:13.463 CC lib/env_dpdk/pci_dpdk.o 00:08:13.463 LIB libspdk_vmd.a 00:08:13.463 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:13.463 SO libspdk_idxd.so.12.1 00:08:13.463 SO libspdk_vmd.so.6.0 00:08:13.463 SYMLINK libspdk_json.so 00:08:13.463 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:13.464 SYMLINK libspdk_vmd.so 00:08:13.464 SYMLINK libspdk_idxd.so 00:08:13.721 CC lib/jsonrpc/jsonrpc_server.o 00:08:13.721 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:13.721 CC lib/jsonrpc/jsonrpc_client.o 00:08:13.721 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:08:13.979 LIB libspdk_jsonrpc.a 00:08:14.237 SO libspdk_jsonrpc.so.6.0 00:08:14.237 SYMLINK libspdk_jsonrpc.so 00:08:14.505 CC lib/rpc/rpc.o 00:08:14.505 LIB libspdk_env_dpdk.a 00:08:14.764 SO libspdk_env_dpdk.so.15.1 00:08:14.764 LIB libspdk_rpc.a 00:08:14.764 SO libspdk_rpc.so.6.0 00:08:15.021 SYMLINK libspdk_rpc.so 00:08:15.022 SYMLINK libspdk_env_dpdk.so 00:08:15.279 CC lib/notify/notify_rpc.o 00:08:15.279 CC lib/notify/notify.o 00:08:15.279 CC lib/keyring/keyring.o 00:08:15.279 CC lib/keyring/keyring_rpc.o 00:08:15.279 CC lib/trace/trace.o 00:08:15.279 CC lib/trace/trace_flags.o 00:08:15.279 CC lib/trace/trace_rpc.o 00:08:15.537 LIB libspdk_notify.a 00:08:15.537 SO libspdk_notify.so.6.0 00:08:15.537 SYMLINK libspdk_notify.so 00:08:15.537 LIB libspdk_keyring.a 00:08:15.537 LIB libspdk_trace.a 00:08:15.537 SO libspdk_keyring.so.2.0 00:08:15.537 SO libspdk_trace.so.11.0 00:08:15.795 SYMLINK libspdk_keyring.so 00:08:15.795 SYMLINK libspdk_trace.so 00:08:16.053 CC lib/thread/thread.o 00:08:16.053 CC lib/thread/iobuf.o 00:08:16.053 CC lib/sock/sock.o 00:08:16.053 CC lib/sock/sock_rpc.o 00:08:16.618 LIB libspdk_sock.a 00:08:16.618 SO libspdk_sock.so.10.0 00:08:16.876 SYMLINK libspdk_sock.so 00:08:17.134 CC lib/nvme/nvme_ctrlr.o 00:08:17.134 CC lib/nvme/nvme_ctrlr_cmd.o 00:08:17.134 CC lib/nvme/nvme_ns_cmd.o 00:08:17.134 CC lib/nvme/nvme_ns.o 00:08:17.134 CC lib/nvme/nvme_fabric.o 00:08:17.134 CC lib/nvme/nvme_pcie.o 00:08:17.134 CC lib/nvme/nvme.o 00:08:17.134 CC lib/nvme/nvme_pcie_common.o 00:08:17.134 CC lib/nvme/nvme_qpair.o 00:08:18.097 CC lib/nvme/nvme_quirks.o 00:08:18.097 CC lib/nvme/nvme_transport.o 00:08:18.097 CC lib/nvme/nvme_discovery.o 00:08:18.097 LIB libspdk_thread.a 00:08:18.097 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:08:18.097 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:08:18.097 SO libspdk_thread.so.10.2 00:08:18.097 CC lib/nvme/nvme_tcp.o 00:08:18.356 CC lib/nvme/nvme_opal.o 00:08:18.356 SYMLINK libspdk_thread.so 
00:08:18.356 CC lib/nvme/nvme_io_msg.o 00:08:18.356 CC lib/nvme/nvme_poll_group.o 00:08:18.615 CC lib/nvme/nvme_zns.o 00:08:18.873 CC lib/nvme/nvme_stubs.o 00:08:18.873 CC lib/nvme/nvme_auth.o 00:08:18.873 CC lib/nvme/nvme_cuse.o 00:08:18.873 CC lib/nvme/nvme_rdma.o 00:08:19.131 CC lib/accel/accel.o 00:08:19.131 CC lib/blob/blobstore.o 00:08:19.390 CC lib/blob/request.o 00:08:19.957 CC lib/blob/zeroes.o 00:08:19.957 CC lib/init/json_config.o 00:08:19.957 CC lib/init/subsystem.o 00:08:20.215 CC lib/blob/blob_bs_dev.o 00:08:20.215 CC lib/accel/accel_rpc.o 00:08:20.215 CC lib/accel/accel_sw.o 00:08:20.215 CC lib/init/subsystem_rpc.o 00:08:20.215 CC lib/init/rpc.o 00:08:20.473 LIB libspdk_init.a 00:08:20.473 SO libspdk_init.so.6.0 00:08:20.730 CC lib/virtio/virtio.o 00:08:20.730 CC lib/virtio/virtio_vfio_user.o 00:08:20.730 CC lib/virtio/virtio_vhost_user.o 00:08:20.730 CC lib/virtio/virtio_pci.o 00:08:20.730 CC lib/fsdev/fsdev.o 00:08:20.730 CC lib/fsdev/fsdev_io.o 00:08:20.730 SYMLINK libspdk_init.so 00:08:20.730 LIB libspdk_accel.a 00:08:20.730 SO libspdk_accel.so.16.0 00:08:20.730 CC lib/event/app.o 00:08:20.988 SYMLINK libspdk_accel.so 00:08:20.988 CC lib/event/reactor.o 00:08:20.988 LIB libspdk_nvme.a 00:08:20.988 CC lib/fsdev/fsdev_rpc.o 00:08:20.988 CC lib/event/log_rpc.o 00:08:21.246 LIB libspdk_virtio.a 00:08:21.246 CC lib/event/app_rpc.o 00:08:21.246 CC lib/event/scheduler_static.o 00:08:21.246 SO libspdk_virtio.so.7.0 00:08:21.246 SO libspdk_nvme.so.15.0 00:08:21.246 CC lib/bdev/bdev.o 00:08:21.246 CC lib/bdev/bdev_rpc.o 00:08:21.246 SYMLINK libspdk_virtio.so 00:08:21.246 CC lib/bdev/bdev_zone.o 00:08:21.511 CC lib/bdev/part.o 00:08:21.511 CC lib/bdev/scsi_nvme.o 00:08:21.511 LIB libspdk_fsdev.a 00:08:21.511 SO libspdk_fsdev.so.1.0 00:08:21.769 LIB libspdk_event.a 00:08:21.769 SYMLINK libspdk_fsdev.so 00:08:21.769 SO libspdk_event.so.14.0 00:08:21.769 SYMLINK libspdk_nvme.so 00:08:21.769 SYMLINK libspdk_event.so 00:08:22.027 CC 
lib/fuse_dispatcher/fuse_dispatcher.o 00:08:22.957 LIB libspdk_fuse_dispatcher.a 00:08:23.214 SO libspdk_fuse_dispatcher.so.1.0 00:08:23.214 SYMLINK libspdk_fuse_dispatcher.so 00:08:24.145 LIB libspdk_blob.a 00:08:24.145 SO libspdk_blob.so.11.0 00:08:24.403 SYMLINK libspdk_blob.so 00:08:24.661 CC lib/blobfs/blobfs.o 00:08:24.661 CC lib/blobfs/tree.o 00:08:24.661 CC lib/lvol/lvol.o 00:08:25.627 LIB libspdk_bdev.a 00:08:25.627 SO libspdk_bdev.so.17.0 00:08:25.885 SYMLINK libspdk_bdev.so 00:08:25.885 LIB libspdk_blobfs.a 00:08:26.143 CC lib/nvmf/ctrlr.o 00:08:26.143 CC lib/nvmf/ctrlr_discovery.o 00:08:26.143 CC lib/nvmf/ctrlr_bdev.o 00:08:26.143 CC lib/nbd/nbd.o 00:08:26.143 CC lib/ftl/ftl_core.o 00:08:26.143 SO libspdk_blobfs.so.10.0 00:08:26.143 CC lib/ublk/ublk.o 00:08:26.143 CC lib/ublk/ublk_rpc.o 00:08:26.143 CC lib/scsi/dev.o 00:08:26.143 LIB libspdk_lvol.a 00:08:26.143 SO libspdk_lvol.so.10.0 00:08:26.143 SYMLINK libspdk_blobfs.so 00:08:26.143 CC lib/nbd/nbd_rpc.o 00:08:26.143 SYMLINK libspdk_lvol.so 00:08:26.143 CC lib/scsi/lun.o 00:08:26.400 CC lib/scsi/port.o 00:08:26.658 CC lib/ftl/ftl_init.o 00:08:26.658 CC lib/scsi/scsi.o 00:08:26.658 CC lib/nvmf/subsystem.o 00:08:26.658 CC lib/scsi/scsi_bdev.o 00:08:26.658 CC lib/scsi/scsi_pr.o 00:08:26.916 CC lib/scsi/scsi_rpc.o 00:08:26.916 CC lib/ftl/ftl_layout.o 00:08:26.916 CC lib/scsi/task.o 00:08:26.916 LIB libspdk_nbd.a 00:08:27.173 SO libspdk_nbd.so.7.0 00:08:27.173 CC lib/ftl/ftl_debug.o 00:08:27.173 SYMLINK libspdk_nbd.so 00:08:27.173 CC lib/ftl/ftl_io.o 00:08:27.174 CC lib/nvmf/nvmf.o 00:08:27.174 LIB libspdk_ublk.a 00:08:27.431 CC lib/nvmf/nvmf_rpc.o 00:08:27.431 SO libspdk_ublk.so.3.0 00:08:27.431 LIB libspdk_scsi.a 00:08:27.431 CC lib/nvmf/transport.o 00:08:27.431 CC lib/nvmf/tcp.o 00:08:27.431 SO libspdk_scsi.so.9.0 00:08:27.431 SYMLINK libspdk_ublk.so 00:08:27.431 CC lib/nvmf/stubs.o 00:08:27.431 CC lib/ftl/ftl_sb.o 00:08:27.431 CC lib/ftl/ftl_l2p.o 00:08:27.431 SYMLINK libspdk_scsi.so 00:08:27.431 CC 
lib/ftl/ftl_l2p_flat.o 00:08:27.689 CC lib/ftl/ftl_nv_cache.o 00:08:27.689 CC lib/ftl/ftl_band.o 00:08:27.689 CC lib/ftl/ftl_band_ops.o 00:08:28.254 CC lib/nvmf/mdns_server.o 00:08:28.255 CC lib/ftl/ftl_writer.o 00:08:28.512 CC lib/nvmf/rdma.o 00:08:28.512 CC lib/nvmf/auth.o 00:08:28.512 CC lib/ftl/ftl_rq.o 00:08:28.512 CC lib/ftl/ftl_reloc.o 00:08:28.770 CC lib/ftl/ftl_l2p_cache.o 00:08:28.770 CC lib/iscsi/conn.o 00:08:28.770 CC lib/vhost/vhost.o 00:08:28.770 CC lib/vhost/vhost_rpc.o 00:08:29.027 CC lib/vhost/vhost_scsi.o 00:08:29.284 CC lib/vhost/vhost_blk.o 00:08:29.542 CC lib/vhost/rte_vhost_user.o 00:08:29.542 CC lib/iscsi/init_grp.o 00:08:29.542 CC lib/iscsi/iscsi.o 00:08:29.542 CC lib/iscsi/param.o 00:08:29.800 CC lib/iscsi/portal_grp.o 00:08:30.059 CC lib/ftl/ftl_p2l.o 00:08:30.059 CC lib/ftl/ftl_p2l_log.o 00:08:30.059 CC lib/ftl/mngt/ftl_mngt.o 00:08:30.317 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:08:30.317 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:08:30.317 CC lib/iscsi/tgt_node.o 00:08:30.575 CC lib/ftl/mngt/ftl_mngt_startup.o 00:08:30.575 CC lib/ftl/mngt/ftl_mngt_md.o 00:08:30.575 CC lib/ftl/mngt/ftl_mngt_misc.o 00:08:30.575 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:08:30.575 CC lib/iscsi/iscsi_subsystem.o 00:08:30.834 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:08:30.834 CC lib/ftl/mngt/ftl_mngt_band.o 00:08:30.834 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:08:30.834 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:08:30.834 CC lib/iscsi/iscsi_rpc.o 00:08:30.834 LIB libspdk_vhost.a 00:08:30.834 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:08:31.093 SO libspdk_vhost.so.8.0 00:08:31.093 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:08:31.093 CC lib/iscsi/task.o 00:08:31.093 SYMLINK libspdk_vhost.so 00:08:31.093 CC lib/ftl/utils/ftl_conf.o 00:08:31.366 CC lib/ftl/utils/ftl_md.o 00:08:31.366 CC lib/ftl/utils/ftl_mempool.o 00:08:31.366 CC lib/ftl/utils/ftl_bitmap.o 00:08:31.366 LIB libspdk_nvmf.a 00:08:31.366 CC lib/ftl/utils/ftl_property.o 00:08:31.366 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:08:31.366 SO 
libspdk_nvmf.so.19.0 00:08:31.366 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:08:31.366 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:08:31.625 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:08:31.625 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:08:31.625 LIB libspdk_iscsi.a 00:08:31.625 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:08:31.625 SO libspdk_iscsi.so.8.0 00:08:31.625 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:08:31.625 CC lib/ftl/upgrade/ftl_sb_v3.o 00:08:31.625 CC lib/ftl/upgrade/ftl_sb_v5.o 00:08:31.625 CC lib/ftl/nvc/ftl_nvc_dev.o 00:08:31.625 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:08:31.884 SYMLINK libspdk_nvmf.so 00:08:31.884 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:08:31.884 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:08:31.884 CC lib/ftl/base/ftl_base_dev.o 00:08:31.884 SYMLINK libspdk_iscsi.so 00:08:31.884 CC lib/ftl/base/ftl_base_bdev.o 00:08:31.884 CC lib/ftl/ftl_trace.o 00:08:32.453 LIB libspdk_ftl.a 00:08:32.453 SO libspdk_ftl.so.9.0 00:08:33.021 SYMLINK libspdk_ftl.so 00:08:33.279 CC module/env_dpdk/env_dpdk_rpc.o 00:08:33.541 CC module/blob/bdev/blob_bdev.o 00:08:33.541 CC module/scheduler/dynamic/scheduler_dynamic.o 00:08:33.541 CC module/keyring/file/keyring.o 00:08:33.541 CC module/sock/posix/posix.o 00:08:33.541 CC module/scheduler/gscheduler/gscheduler.o 00:08:33.541 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:08:33.541 CC module/keyring/linux/keyring.o 00:08:33.541 CC module/accel/error/accel_error.o 00:08:33.541 CC module/fsdev/aio/fsdev_aio.o 00:08:33.541 LIB libspdk_env_dpdk_rpc.a 00:08:33.541 SO libspdk_env_dpdk_rpc.so.6.0 00:08:33.541 SYMLINK libspdk_env_dpdk_rpc.so 00:08:33.541 CC module/fsdev/aio/fsdev_aio_rpc.o 00:08:33.541 CC module/keyring/linux/keyring_rpc.o 00:08:33.541 CC module/keyring/file/keyring_rpc.o 00:08:33.541 LIB libspdk_scheduler_gscheduler.a 00:08:33.541 LIB libspdk_scheduler_dpdk_governor.a 00:08:33.541 SO libspdk_scheduler_gscheduler.so.4.0 00:08:33.541 SO libspdk_scheduler_dpdk_governor.so.4.0 00:08:33.800 CC 
module/accel/error/accel_error_rpc.o 00:08:33.800 LIB libspdk_scheduler_dynamic.a 00:08:33.800 SO libspdk_scheduler_dynamic.so.4.0 00:08:33.800 SYMLINK libspdk_scheduler_gscheduler.so 00:08:33.800 SYMLINK libspdk_scheduler_dpdk_governor.so 00:08:33.800 CC module/fsdev/aio/linux_aio_mgr.o 00:08:33.800 LIB libspdk_keyring_linux.a 00:08:33.800 LIB libspdk_keyring_file.a 00:08:33.800 SYMLINK libspdk_scheduler_dynamic.so 00:08:33.800 SO libspdk_keyring_linux.so.1.0 00:08:33.800 SO libspdk_keyring_file.so.2.0 00:08:33.800 LIB libspdk_accel_error.a 00:08:33.800 SYMLINK libspdk_keyring_linux.so 00:08:33.800 SYMLINK libspdk_keyring_file.so 00:08:33.800 SO libspdk_accel_error.so.2.0 00:08:33.800 CC module/accel/ioat/accel_ioat.o 00:08:33.800 CC module/accel/ioat/accel_ioat_rpc.o 00:08:34.059 CC module/accel/dsa/accel_dsa.o 00:08:34.059 SYMLINK libspdk_accel_error.so 00:08:34.059 CC module/accel/iaa/accel_iaa.o 00:08:34.059 CC module/accel/dsa/accel_dsa_rpc.o 00:08:34.059 CC module/accel/iaa/accel_iaa_rpc.o 00:08:34.059 LIB libspdk_blob_bdev.a 00:08:34.059 SO libspdk_blob_bdev.so.11.0 00:08:34.059 SYMLINK libspdk_blob_bdev.so 00:08:34.317 LIB libspdk_accel_ioat.a 00:08:34.317 SO libspdk_accel_ioat.so.6.0 00:08:34.317 LIB libspdk_accel_iaa.a 00:08:34.317 SYMLINK libspdk_accel_ioat.so 00:08:34.317 SO libspdk_accel_iaa.so.3.0 00:08:34.317 CC module/bdev/delay/vbdev_delay.o 00:08:34.317 CC module/bdev/lvol/vbdev_lvol.o 00:08:34.317 CC module/bdev/gpt/gpt.o 00:08:34.317 CC module/blobfs/bdev/blobfs_bdev.o 00:08:34.317 CC module/bdev/error/vbdev_error.o 00:08:34.574 SYMLINK libspdk_accel_iaa.so 00:08:34.574 LIB libspdk_fsdev_aio.a 00:08:34.574 LIB libspdk_accel_dsa.a 00:08:34.574 SO libspdk_accel_dsa.so.5.0 00:08:34.574 SO libspdk_fsdev_aio.so.1.0 00:08:34.574 LIB libspdk_sock_posix.a 00:08:34.574 CC module/bdev/malloc/bdev_malloc.o 00:08:34.574 SO libspdk_sock_posix.so.6.0 00:08:34.574 SYMLINK libspdk_accel_dsa.so 00:08:34.574 SYMLINK libspdk_fsdev_aio.so 00:08:34.574 CC 
module/bdev/gpt/vbdev_gpt.o 00:08:34.574 CC module/bdev/malloc/bdev_malloc_rpc.o 00:08:34.574 CC module/bdev/null/bdev_null.o 00:08:34.832 SYMLINK libspdk_sock_posix.so 00:08:34.832 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:08:34.832 CC module/bdev/delay/vbdev_delay_rpc.o 00:08:34.832 CC module/bdev/nvme/bdev_nvme.o 00:08:34.832 CC module/bdev/error/vbdev_error_rpc.o 00:08:34.832 LIB libspdk_bdev_delay.a 00:08:35.090 LIB libspdk_blobfs_bdev.a 00:08:35.090 SO libspdk_bdev_delay.so.6.0 00:08:35.090 LIB libspdk_bdev_gpt.a 00:08:35.090 SO libspdk_blobfs_bdev.so.6.0 00:08:35.090 SO libspdk_bdev_gpt.so.6.0 00:08:35.090 CC module/bdev/passthru/vbdev_passthru.o 00:08:35.090 CC module/bdev/raid/bdev_raid.o 00:08:35.090 SYMLINK libspdk_bdev_delay.so 00:08:35.090 CC module/bdev/null/bdev_null_rpc.o 00:08:35.090 CC module/bdev/raid/bdev_raid_rpc.o 00:08:35.090 SYMLINK libspdk_blobfs_bdev.so 00:08:35.090 LIB libspdk_bdev_malloc.a 00:08:35.090 SYMLINK libspdk_bdev_gpt.so 00:08:35.090 SO libspdk_bdev_malloc.so.6.0 00:08:35.090 LIB libspdk_bdev_error.a 00:08:35.090 SO libspdk_bdev_error.so.6.0 00:08:35.090 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:08:35.347 SYMLINK libspdk_bdev_malloc.so 00:08:35.347 LIB libspdk_bdev_null.a 00:08:35.347 CC module/bdev/split/vbdev_split.o 00:08:35.347 SYMLINK libspdk_bdev_error.so 00:08:35.347 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:08:35.347 CC module/bdev/zone_block/vbdev_zone_block.o 00:08:35.347 SO libspdk_bdev_null.so.6.0 00:08:35.347 CC module/bdev/raid/bdev_raid_sb.o 00:08:35.347 SYMLINK libspdk_bdev_null.so 00:08:35.347 CC module/bdev/aio/bdev_aio.o 00:08:35.606 LIB libspdk_bdev_passthru.a 00:08:35.606 SO libspdk_bdev_passthru.so.6.0 00:08:35.606 CC module/bdev/split/vbdev_split_rpc.o 00:08:35.606 CC module/bdev/ftl/bdev_ftl.o 00:08:35.606 SYMLINK libspdk_bdev_passthru.so 00:08:35.606 LIB libspdk_bdev_lvol.a 00:08:35.606 CC module/bdev/iscsi/bdev_iscsi.o 00:08:35.864 SO libspdk_bdev_lvol.so.6.0 00:08:35.864 CC 
module/bdev/raid/raid0.o 00:08:35.864 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:08:35.864 LIB libspdk_bdev_split.a 00:08:35.864 SYMLINK libspdk_bdev_lvol.so 00:08:35.864 CC module/bdev/nvme/bdev_nvme_rpc.o 00:08:35.864 CC module/bdev/virtio/bdev_virtio_scsi.o 00:08:35.864 SO libspdk_bdev_split.so.6.0 00:08:35.864 CC module/bdev/aio/bdev_aio_rpc.o 00:08:35.864 LIB libspdk_bdev_zone_block.a 00:08:36.122 SYMLINK libspdk_bdev_split.so 00:08:36.122 CC module/bdev/nvme/nvme_rpc.o 00:08:36.122 SO libspdk_bdev_zone_block.so.6.0 00:08:36.122 CC module/bdev/ftl/bdev_ftl_rpc.o 00:08:36.122 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:08:36.122 SYMLINK libspdk_bdev_zone_block.so 00:08:36.122 CC module/bdev/raid/raid1.o 00:08:36.122 LIB libspdk_bdev_aio.a 00:08:36.122 SO libspdk_bdev_aio.so.6.0 00:08:36.380 CC module/bdev/raid/concat.o 00:08:36.380 LIB libspdk_bdev_iscsi.a 00:08:36.380 SYMLINK libspdk_bdev_aio.so 00:08:36.380 CC module/bdev/nvme/bdev_mdns_client.o 00:08:36.380 SO libspdk_bdev_iscsi.so.6.0 00:08:36.380 LIB libspdk_bdev_ftl.a 00:08:36.380 SO libspdk_bdev_ftl.so.6.0 00:08:36.380 CC module/bdev/nvme/vbdev_opal.o 00:08:36.380 SYMLINK libspdk_bdev_iscsi.so 00:08:36.380 CC module/bdev/raid/raid5f.o 00:08:36.380 SYMLINK libspdk_bdev_ftl.so 00:08:36.380 CC module/bdev/nvme/vbdev_opal_rpc.o 00:08:36.639 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:08:36.639 CC module/bdev/virtio/bdev_virtio_blk.o 00:08:36.639 CC module/bdev/virtio/bdev_virtio_rpc.o 00:08:36.898 LIB libspdk_bdev_virtio.a 00:08:36.898 SO libspdk_bdev_virtio.so.6.0 00:08:37.156 SYMLINK libspdk_bdev_virtio.so 00:08:37.156 LIB libspdk_bdev_raid.a 00:08:37.156 SO libspdk_bdev_raid.so.6.0 00:08:37.414 SYMLINK libspdk_bdev_raid.so 00:08:38.347 LIB libspdk_bdev_nvme.a 00:08:38.347 SO libspdk_bdev_nvme.so.7.0 00:08:38.605 SYMLINK libspdk_bdev_nvme.so 00:08:39.171 CC module/event/subsystems/sock/sock.o 00:08:39.171 CC module/event/subsystems/iobuf/iobuf.o 00:08:39.171 CC 
module/event/subsystems/iobuf/iobuf_rpc.o 00:08:39.171 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:08:39.171 CC module/event/subsystems/scheduler/scheduler.o 00:08:39.171 CC module/event/subsystems/vmd/vmd.o 00:08:39.171 CC module/event/subsystems/keyring/keyring.o 00:08:39.171 CC module/event/subsystems/vmd/vmd_rpc.o 00:08:39.171 CC module/event/subsystems/fsdev/fsdev.o 00:08:39.171 LIB libspdk_event_keyring.a 00:08:39.171 LIB libspdk_event_sock.a 00:08:39.171 SO libspdk_event_keyring.so.1.0 00:08:39.171 SO libspdk_event_sock.so.5.0 00:08:39.429 LIB libspdk_event_iobuf.a 00:08:39.429 LIB libspdk_event_vmd.a 00:08:39.429 LIB libspdk_event_scheduler.a 00:08:39.429 LIB libspdk_event_vhost_blk.a 00:08:39.429 SO libspdk_event_scheduler.so.4.0 00:08:39.429 SYMLINK libspdk_event_sock.so 00:08:39.429 SO libspdk_event_vmd.so.6.0 00:08:39.429 SO libspdk_event_iobuf.so.3.0 00:08:39.429 LIB libspdk_event_fsdev.a 00:08:39.429 SYMLINK libspdk_event_keyring.so 00:08:39.429 SO libspdk_event_vhost_blk.so.3.0 00:08:39.429 SO libspdk_event_fsdev.so.1.0 00:08:39.429 SYMLINK libspdk_event_iobuf.so 00:08:39.429 SYMLINK libspdk_event_vmd.so 00:08:39.429 SYMLINK libspdk_event_vhost_blk.so 00:08:39.429 SYMLINK libspdk_event_scheduler.so 00:08:39.429 SYMLINK libspdk_event_fsdev.so 00:08:39.687 CC module/event/subsystems/accel/accel.o 00:08:39.946 LIB libspdk_event_accel.a 00:08:39.946 SO libspdk_event_accel.so.6.0 00:08:39.946 SYMLINK libspdk_event_accel.so 00:08:40.204 CC module/event/subsystems/bdev/bdev.o 00:08:40.462 LIB libspdk_event_bdev.a 00:08:40.462 SO libspdk_event_bdev.so.6.0 00:08:40.462 SYMLINK libspdk_event_bdev.so 00:08:40.720 CC module/event/subsystems/nbd/nbd.o 00:08:40.720 CC module/event/subsystems/ublk/ublk.o 00:08:40.720 CC module/event/subsystems/scsi/scsi.o 00:08:40.720 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:08:40.720 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:08:40.979 LIB libspdk_event_nbd.a 00:08:40.979 SO libspdk_event_nbd.so.6.0 00:08:40.979 
LIB libspdk_event_ublk.a 00:08:40.979 SYMLINK libspdk_event_nbd.so 00:08:40.979 LIB libspdk_event_scsi.a 00:08:40.979 SO libspdk_event_ublk.so.3.0 00:08:41.251 SO libspdk_event_scsi.so.6.0 00:08:41.251 LIB libspdk_event_nvmf.a 00:08:41.251 SYMLINK libspdk_event_ublk.so 00:08:41.251 SO libspdk_event_nvmf.so.6.0 00:08:41.251 SYMLINK libspdk_event_scsi.so 00:08:41.251 SYMLINK libspdk_event_nvmf.so 00:08:41.514 CC module/event/subsystems/iscsi/iscsi.o 00:08:41.514 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:08:41.823 LIB libspdk_event_vhost_scsi.a 00:08:41.823 LIB libspdk_event_iscsi.a 00:08:41.823 SO libspdk_event_vhost_scsi.so.3.0 00:08:41.823 SO libspdk_event_iscsi.so.6.0 00:08:41.823 SYMLINK libspdk_event_iscsi.so 00:08:41.823 SYMLINK libspdk_event_vhost_scsi.so 00:08:42.080 SO libspdk.so.6.0 00:08:42.080 SYMLINK libspdk.so 00:08:42.338 TEST_HEADER include/spdk/accel.h 00:08:42.338 CC app/trace_record/trace_record.o 00:08:42.338 TEST_HEADER include/spdk/accel_module.h 00:08:42.338 CXX app/trace/trace.o 00:08:42.338 TEST_HEADER include/spdk/assert.h 00:08:42.338 TEST_HEADER include/spdk/barrier.h 00:08:42.338 TEST_HEADER include/spdk/base64.h 00:08:42.338 TEST_HEADER include/spdk/bdev.h 00:08:42.338 TEST_HEADER include/spdk/bdev_module.h 00:08:42.338 TEST_HEADER include/spdk/bdev_zone.h 00:08:42.338 TEST_HEADER include/spdk/bit_array.h 00:08:42.338 TEST_HEADER include/spdk/bit_pool.h 00:08:42.338 CC examples/interrupt_tgt/interrupt_tgt.o 00:08:42.338 TEST_HEADER include/spdk/blob_bdev.h 00:08:42.338 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:42.338 TEST_HEADER include/spdk/blobfs.h 00:08:42.338 TEST_HEADER include/spdk/blob.h 00:08:42.338 TEST_HEADER include/spdk/conf.h 00:08:42.338 TEST_HEADER include/spdk/config.h 00:08:42.338 TEST_HEADER include/spdk/cpuset.h 00:08:42.338 TEST_HEADER include/spdk/crc16.h 00:08:42.338 TEST_HEADER include/spdk/crc32.h 00:08:42.338 TEST_HEADER include/spdk/crc64.h 00:08:42.338 TEST_HEADER include/spdk/dif.h 00:08:42.338 
TEST_HEADER include/spdk/dma.h 00:08:42.338 TEST_HEADER include/spdk/endian.h 00:08:42.338 TEST_HEADER include/spdk/env_dpdk.h 00:08:42.338 TEST_HEADER include/spdk/env.h 00:08:42.338 TEST_HEADER include/spdk/event.h 00:08:42.338 TEST_HEADER include/spdk/fd_group.h 00:08:42.338 TEST_HEADER include/spdk/fd.h 00:08:42.338 TEST_HEADER include/spdk/file.h 00:08:42.338 TEST_HEADER include/spdk/fsdev.h 00:08:42.338 TEST_HEADER include/spdk/fsdev_module.h 00:08:42.338 TEST_HEADER include/spdk/ftl.h 00:08:42.338 TEST_HEADER include/spdk/fuse_dispatcher.h 00:08:42.338 CC test/thread/poller_perf/poller_perf.o 00:08:42.338 TEST_HEADER include/spdk/gpt_spec.h 00:08:42.338 CC examples/util/zipf/zipf.o 00:08:42.338 TEST_HEADER include/spdk/hexlify.h 00:08:42.338 TEST_HEADER include/spdk/histogram_data.h 00:08:42.338 TEST_HEADER include/spdk/idxd.h 00:08:42.338 TEST_HEADER include/spdk/idxd_spec.h 00:08:42.338 TEST_HEADER include/spdk/init.h 00:08:42.338 TEST_HEADER include/spdk/ioat.h 00:08:42.338 CC examples/ioat/perf/perf.o 00:08:42.338 TEST_HEADER include/spdk/ioat_spec.h 00:08:42.338 TEST_HEADER include/spdk/iscsi_spec.h 00:08:42.338 TEST_HEADER include/spdk/json.h 00:08:42.338 TEST_HEADER include/spdk/jsonrpc.h 00:08:42.338 CC test/dma/test_dma/test_dma.o 00:08:42.338 TEST_HEADER include/spdk/keyring.h 00:08:42.338 TEST_HEADER include/spdk/keyring_module.h 00:08:42.338 TEST_HEADER include/spdk/likely.h 00:08:42.338 TEST_HEADER include/spdk/log.h 00:08:42.338 TEST_HEADER include/spdk/lvol.h 00:08:42.338 CC test/app/bdev_svc/bdev_svc.o 00:08:42.338 TEST_HEADER include/spdk/md5.h 00:08:42.338 TEST_HEADER include/spdk/memory.h 00:08:42.338 TEST_HEADER include/spdk/mmio.h 00:08:42.338 TEST_HEADER include/spdk/nbd.h 00:08:42.338 TEST_HEADER include/spdk/net.h 00:08:42.597 TEST_HEADER include/spdk/notify.h 00:08:42.597 TEST_HEADER include/spdk/nvme.h 00:08:42.597 TEST_HEADER include/spdk/nvme_intel.h 00:08:42.597 CC test/env/mem_callbacks/mem_callbacks.o 00:08:42.597 TEST_HEADER 
include/spdk/nvme_ocssd.h 00:08:42.597 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:42.597 TEST_HEADER include/spdk/nvme_spec.h 00:08:42.597 TEST_HEADER include/spdk/nvme_zns.h 00:08:42.597 TEST_HEADER include/spdk/nvmf_cmd.h 00:08:42.597 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:42.597 TEST_HEADER include/spdk/nvmf.h 00:08:42.597 TEST_HEADER include/spdk/nvmf_spec.h 00:08:42.597 TEST_HEADER include/spdk/nvmf_transport.h 00:08:42.597 TEST_HEADER include/spdk/opal.h 00:08:42.597 TEST_HEADER include/spdk/opal_spec.h 00:08:42.597 TEST_HEADER include/spdk/pci_ids.h 00:08:42.597 TEST_HEADER include/spdk/pipe.h 00:08:42.597 TEST_HEADER include/spdk/queue.h 00:08:42.597 TEST_HEADER include/spdk/reduce.h 00:08:42.597 TEST_HEADER include/spdk/rpc.h 00:08:42.597 TEST_HEADER include/spdk/scheduler.h 00:08:42.597 LINK zipf 00:08:42.597 TEST_HEADER include/spdk/scsi.h 00:08:42.597 TEST_HEADER include/spdk/scsi_spec.h 00:08:42.597 TEST_HEADER include/spdk/sock.h 00:08:42.597 TEST_HEADER include/spdk/stdinc.h 00:08:42.597 TEST_HEADER include/spdk/string.h 00:08:42.597 TEST_HEADER include/spdk/thread.h 00:08:42.597 TEST_HEADER include/spdk/trace.h 00:08:42.597 LINK interrupt_tgt 00:08:42.597 TEST_HEADER include/spdk/trace_parser.h 00:08:42.597 TEST_HEADER include/spdk/tree.h 00:08:42.597 TEST_HEADER include/spdk/ublk.h 00:08:42.597 TEST_HEADER include/spdk/util.h 00:08:42.597 TEST_HEADER include/spdk/uuid.h 00:08:42.597 TEST_HEADER include/spdk/version.h 00:08:42.597 LINK bdev_svc 00:08:42.597 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:42.597 TEST_HEADER include/spdk/vfio_user_spec.h 00:08:42.597 TEST_HEADER include/spdk/vhost.h 00:08:42.597 TEST_HEADER include/spdk/vmd.h 00:08:42.597 TEST_HEADER include/spdk/xor.h 00:08:42.597 TEST_HEADER include/spdk/zipf.h 00:08:42.597 CXX test/cpp_headers/accel.o 00:08:42.854 LINK poller_perf 00:08:42.854 LINK spdk_trace_record 00:08:42.854 LINK spdk_trace 00:08:42.854 LINK ioat_perf 00:08:42.854 CC examples/ioat/verify/verify.o 
00:08:43.112 CXX test/cpp_headers/accel_module.o 00:08:43.112 LINK test_dma 00:08:43.112 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:43.112 CC test/app/histogram_perf/histogram_perf.o 00:08:43.112 CC test/env/vtophys/vtophys.o 00:08:43.112 CC test/env/memory/memory_ut.o 00:08:43.112 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:43.112 CC app/nvmf_tgt/nvmf_main.o 00:08:43.369 CXX test/cpp_headers/assert.o 00:08:43.369 LINK histogram_perf 00:08:43.369 LINK vtophys 00:08:43.369 LINK verify 00:08:43.627 CXX test/cpp_headers/barrier.o 00:08:43.627 LINK env_dpdk_post_init 00:08:43.627 LINK nvmf_tgt 00:08:43.627 CC test/env/pci/pci_ut.o 00:08:43.627 LINK nvme_fuzz 00:08:43.627 CC test/rpc_client/rpc_client_test.o 00:08:43.627 LINK mem_callbacks 00:08:43.627 CXX test/cpp_headers/base64.o 00:08:43.885 CC examples/thread/thread/thread_ex.o 00:08:43.885 CC test/accel/dif/dif.o 00:08:43.885 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:43.885 CC app/iscsi_tgt/iscsi_tgt.o 00:08:44.143 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:44.143 LINK rpc_client_test 00:08:44.143 CXX test/cpp_headers/bdev.o 00:08:44.143 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:44.143 LINK thread 00:08:44.143 LINK iscsi_tgt 00:08:44.143 CC test/blobfs/mkfs/mkfs.o 00:08:44.400 CXX test/cpp_headers/bdev_module.o 00:08:44.400 LINK pci_ut 00:08:44.657 CXX test/cpp_headers/bdev_zone.o 00:08:44.657 CC app/spdk_tgt/spdk_tgt.o 00:08:44.657 LINK mkfs 00:08:44.657 CC test/event/event_perf/event_perf.o 00:08:44.657 LINK vhost_fuzz 00:08:44.657 CC examples/sock/hello_world/hello_sock.o 00:08:44.657 CXX test/cpp_headers/bit_array.o 00:08:44.915 LINK event_perf 00:08:44.915 CXX test/cpp_headers/bit_pool.o 00:08:45.171 LINK hello_sock 00:08:45.171 LINK spdk_tgt 00:08:45.171 CC examples/vmd/lsvmd/lsvmd.o 00:08:45.171 CC test/event/reactor/reactor.o 00:08:45.171 CC examples/vmd/led/led.o 00:08:45.171 CXX test/cpp_headers/blob_bdev.o 00:08:45.171 LINK dif 00:08:45.171 LINK reactor 00:08:45.171 LINK 
lsvmd 00:08:45.469 LINK led 00:08:45.469 CC test/lvol/esnap/esnap.o 00:08:45.469 CC test/event/reactor_perf/reactor_perf.o 00:08:45.469 LINK memory_ut 00:08:45.469 CXX test/cpp_headers/blobfs_bdev.o 00:08:45.729 LINK reactor_perf 00:08:45.729 CC test/nvme/aer/aer.o 00:08:45.729 CXX test/cpp_headers/blobfs.o 00:08:45.729 CC test/nvme/reset/reset.o 00:08:45.729 CC app/spdk_lspci/spdk_lspci.o 00:08:45.988 CC examples/idxd/perf/perf.o 00:08:45.988 LINK spdk_lspci 00:08:45.988 CXX test/cpp_headers/blob.o 00:08:45.988 CC test/event/app_repeat/app_repeat.o 00:08:45.988 CC examples/fsdev/hello_world/hello_fsdev.o 00:08:45.988 CC examples/accel/perf/accel_perf.o 00:08:45.988 LINK reset 00:08:45.988 LINK aer 00:08:46.246 LINK app_repeat 00:08:46.246 CC app/spdk_nvme_perf/perf.o 00:08:46.246 CXX test/cpp_headers/conf.o 00:08:46.246 LINK hello_fsdev 00:08:46.505 CC test/nvme/sgl/sgl.o 00:08:46.505 CXX test/cpp_headers/config.o 00:08:46.505 CXX test/cpp_headers/cpuset.o 00:08:46.505 CC examples/blob/hello_world/hello_blob.o 00:08:46.763 CXX test/cpp_headers/crc16.o 00:08:46.763 LINK idxd_perf 00:08:46.763 LINK accel_perf 00:08:46.763 CC test/event/scheduler/scheduler.o 00:08:46.763 LINK sgl 00:08:46.763 CXX test/cpp_headers/crc32.o 00:08:46.763 LINK hello_blob 00:08:46.763 CXX test/cpp_headers/crc64.o 00:08:47.022 CC test/app/jsoncat/jsoncat.o 00:08:47.022 CXX test/cpp_headers/dif.o 00:08:47.022 CC test/app/stub/stub.o 00:08:47.022 LINK scheduler 00:08:47.022 CXX test/cpp_headers/dma.o 00:08:47.022 CC test/nvme/e2edp/nvme_dp.o 00:08:47.281 LINK iscsi_fuzz 00:08:47.281 LINK jsoncat 00:08:47.281 CC examples/blob/cli/blobcli.o 00:08:47.281 LINK stub 00:08:47.281 CXX test/cpp_headers/endian.o 00:08:47.281 CXX test/cpp_headers/env_dpdk.o 00:08:47.281 CC app/spdk_nvme_identify/identify.o 00:08:47.281 CXX test/cpp_headers/env.o 00:08:47.539 CXX test/cpp_headers/event.o 00:08:47.540 LINK spdk_nvme_perf 00:08:47.540 CXX test/cpp_headers/fd_group.o 00:08:47.540 LINK nvme_dp 00:08:47.540 
CXX test/cpp_headers/fd.o 00:08:47.540 CXX test/cpp_headers/file.o 00:08:47.540 CXX test/cpp_headers/fsdev.o 00:08:47.540 CXX test/cpp_headers/fsdev_module.o 00:08:47.798 CXX test/cpp_headers/ftl.o 00:08:47.798 CXX test/cpp_headers/fuse_dispatcher.o 00:08:47.798 CC app/spdk_nvme_discover/discovery_aer.o 00:08:47.798 CXX test/cpp_headers/gpt_spec.o 00:08:47.798 CC test/nvme/overhead/overhead.o 00:08:47.798 CC test/nvme/err_injection/err_injection.o 00:08:47.798 LINK blobcli 00:08:48.057 CC test/nvme/startup/startup.o 00:08:48.057 CC test/nvme/reserve/reserve.o 00:08:48.057 LINK spdk_nvme_discover 00:08:48.057 CC test/nvme/simple_copy/simple_copy.o 00:08:48.057 LINK err_injection 00:08:48.057 CXX test/cpp_headers/hexlify.o 00:08:48.057 LINK overhead 00:08:48.315 LINK startup 00:08:48.315 CC examples/nvme/hello_world/hello_world.o 00:08:48.315 LINK reserve 00:08:48.315 CXX test/cpp_headers/histogram_data.o 00:08:48.315 LINK simple_copy 00:08:48.315 CC examples/nvme/reconnect/reconnect.o 00:08:48.315 CC examples/nvme/nvme_manage/nvme_manage.o 00:08:48.315 CC examples/nvme/arbitration/arbitration.o 00:08:48.574 CXX test/cpp_headers/idxd.o 00:08:48.574 LINK spdk_nvme_identify 00:08:48.574 CC examples/nvme/hotplug/hotplug.o 00:08:48.574 LINK hello_world 00:08:48.574 CC examples/nvme/cmb_copy/cmb_copy.o 00:08:48.574 CC test/nvme/connect_stress/connect_stress.o 00:08:48.574 CXX test/cpp_headers/idxd_spec.o 00:08:48.832 LINK reconnect 00:08:48.832 CC test/nvme/boot_partition/boot_partition.o 00:08:48.832 LINK connect_stress 00:08:48.832 LINK cmb_copy 00:08:48.832 LINK hotplug 00:08:48.832 CC app/spdk_top/spdk_top.o 00:08:48.832 LINK arbitration 00:08:48.832 CXX test/cpp_headers/init.o 00:08:49.090 CXX test/cpp_headers/ioat.o 00:08:49.090 CXX test/cpp_headers/ioat_spec.o 00:08:49.090 LINK boot_partition 00:08:49.090 LINK nvme_manage 00:08:49.090 CXX test/cpp_headers/iscsi_spec.o 00:08:49.090 CC app/vhost/vhost.o 00:08:49.347 CC test/bdev/bdevio/bdevio.o 00:08:49.347 CXX 
test/cpp_headers/json.o 00:08:49.347 CC app/spdk_dd/spdk_dd.o 00:08:49.347 CC test/nvme/compliance/nvme_compliance.o 00:08:49.347 CC test/nvme/fused_ordering/fused_ordering.o 00:08:49.347 CC examples/nvme/abort/abort.o 00:08:49.348 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:08:49.348 LINK vhost 00:08:49.348 CXX test/cpp_headers/jsonrpc.o 00:08:49.606 LINK fused_ordering 00:08:49.606 LINK pmr_persistence 00:08:49.606 CXX test/cpp_headers/keyring.o 00:08:49.606 CXX test/cpp_headers/keyring_module.o 00:08:49.606 LINK bdevio 00:08:49.606 LINK nvme_compliance 00:08:49.864 LINK spdk_dd 00:08:49.864 CXX test/cpp_headers/likely.o 00:08:49.864 CXX test/cpp_headers/log.o 00:08:49.864 LINK abort 00:08:49.864 CXX test/cpp_headers/lvol.o 00:08:49.864 CXX test/cpp_headers/md5.o 00:08:49.864 CXX test/cpp_headers/memory.o 00:08:49.864 CXX test/cpp_headers/mmio.o 00:08:50.121 CC test/nvme/doorbell_aers/doorbell_aers.o 00:08:50.121 LINK spdk_top 00:08:50.121 CC examples/bdev/hello_world/hello_bdev.o 00:08:50.121 CC examples/bdev/bdevperf/bdevperf.o 00:08:50.121 CC app/fio/nvme/fio_plugin.o 00:08:50.121 CXX test/cpp_headers/nbd.o 00:08:50.121 CXX test/cpp_headers/net.o 00:08:50.381 LINK doorbell_aers 00:08:50.381 CC app/fio/bdev/fio_plugin.o 00:08:50.381 CXX test/cpp_headers/notify.o 00:08:50.381 CC test/nvme/fdp/fdp.o 00:08:50.381 CC test/nvme/cuse/cuse.o 00:08:50.381 LINK hello_bdev 00:08:50.381 CXX test/cpp_headers/nvme.o 00:08:50.381 CXX test/cpp_headers/nvme_intel.o 00:08:50.381 CXX test/cpp_headers/nvme_ocssd.o 00:08:50.640 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:50.640 CXX test/cpp_headers/nvme_spec.o 00:08:50.640 CXX test/cpp_headers/nvme_zns.o 00:08:50.640 CXX test/cpp_headers/nvmf_cmd.o 00:08:50.899 LINK fdp 00:08:50.899 CXX test/cpp_headers/nvmf_fc_spec.o 00:08:50.899 LINK spdk_nvme 00:08:50.899 CXX test/cpp_headers/nvmf.o 00:08:50.899 CXX test/cpp_headers/nvmf_spec.o 00:08:50.899 CXX test/cpp_headers/nvmf_transport.o 00:08:50.899 LINK spdk_bdev 
00:08:50.899 CXX test/cpp_headers/opal.o 00:08:50.899 CXX test/cpp_headers/opal_spec.o 00:08:51.157 CXX test/cpp_headers/pci_ids.o 00:08:51.157 CXX test/cpp_headers/pipe.o 00:08:51.157 CXX test/cpp_headers/queue.o 00:08:51.157 CXX test/cpp_headers/reduce.o 00:08:51.157 CXX test/cpp_headers/rpc.o 00:08:51.157 CXX test/cpp_headers/scheduler.o 00:08:51.157 LINK bdevperf 00:08:51.157 CXX test/cpp_headers/scsi.o 00:08:51.157 CXX test/cpp_headers/scsi_spec.o 00:08:51.415 CXX test/cpp_headers/sock.o 00:08:51.415 CXX test/cpp_headers/stdinc.o 00:08:51.415 CXX test/cpp_headers/string.o 00:08:51.415 CXX test/cpp_headers/thread.o 00:08:51.415 CXX test/cpp_headers/trace.o 00:08:51.415 CXX test/cpp_headers/trace_parser.o 00:08:51.415 CXX test/cpp_headers/tree.o 00:08:51.415 CXX test/cpp_headers/ublk.o 00:08:51.415 CXX test/cpp_headers/util.o 00:08:51.415 CXX test/cpp_headers/uuid.o 00:08:51.674 CXX test/cpp_headers/version.o 00:08:51.674 CXX test/cpp_headers/vfio_user_pci.o 00:08:51.674 CXX test/cpp_headers/vfio_user_spec.o 00:08:51.674 CXX test/cpp_headers/vhost.o 00:08:51.674 CXX test/cpp_headers/vmd.o 00:08:51.674 CXX test/cpp_headers/xor.o 00:08:51.674 CXX test/cpp_headers/zipf.o 00:08:51.932 CC examples/nvmf/nvmf/nvmf.o 00:08:51.932 LINK cuse 00:08:52.190 LINK nvmf 00:08:53.129 LINK esnap 00:08:53.694 00:08:53.695 real 1m51.341s 00:08:53.695 user 10m25.969s 00:08:53.695 sys 2m0.309s 00:08:53.695 09:08:37 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:08:53.695 09:08:37 make -- common/autotest_common.sh@10 -- $ set +x 00:08:53.695 ************************************ 00:08:53.695 END TEST make 00:08:53.695 ************************************ 00:08:53.695 09:08:37 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:53.695 09:08:37 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:53.695 09:08:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:53.695 09:08:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:53.695 
09:08:37 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:08:53.695 09:08:37 -- pm/common@44 -- $ pid=5234 00:08:53.695 09:08:37 -- pm/common@50 -- $ kill -TERM 5234 00:08:53.695 09:08:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:53.695 09:08:37 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:08:53.695 09:08:37 -- pm/common@44 -- $ pid=5236 00:08:53.695 09:08:37 -- pm/common@50 -- $ kill -TERM 5236 00:08:53.695 09:08:37 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:53.695 09:08:37 -- common/autotest_common.sh@1691 -- # lcov --version 00:08:53.695 09:08:37 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:53.953 09:08:37 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:53.953 09:08:37 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.953 09:08:37 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.953 09:08:37 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.953 09:08:37 -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.953 09:08:37 -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.953 09:08:37 -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.953 09:08:37 -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.953 09:08:37 -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.953 09:08:37 -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.953 09:08:37 -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.953 09:08:37 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.953 09:08:37 -- scripts/common.sh@344 -- # case "$op" in 00:08:53.953 09:08:37 -- scripts/common.sh@345 -- # : 1 00:08:53.953 09:08:37 -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.953 09:08:37 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.953 09:08:37 -- scripts/common.sh@365 -- # decimal 1 00:08:53.953 09:08:37 -- scripts/common.sh@353 -- # local d=1 00:08:53.953 09:08:37 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.953 09:08:37 -- scripts/common.sh@355 -- # echo 1 00:08:53.953 09:08:37 -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.953 09:08:37 -- scripts/common.sh@366 -- # decimal 2 00:08:53.953 09:08:37 -- scripts/common.sh@353 -- # local d=2 00:08:53.953 09:08:37 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.953 09:08:37 -- scripts/common.sh@355 -- # echo 2 00:08:53.953 09:08:37 -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.953 09:08:37 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.953 09:08:37 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.953 09:08:37 -- scripts/common.sh@368 -- # return 0 00:08:53.953 09:08:37 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.953 09:08:37 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:53.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.953 --rc genhtml_branch_coverage=1 00:08:53.953 --rc genhtml_function_coverage=1 00:08:53.953 --rc genhtml_legend=1 00:08:53.953 --rc geninfo_all_blocks=1 00:08:53.953 --rc geninfo_unexecuted_blocks=1 00:08:53.953 00:08:53.953 ' 00:08:53.953 09:08:37 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:53.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.953 --rc genhtml_branch_coverage=1 00:08:53.953 --rc genhtml_function_coverage=1 00:08:53.953 --rc genhtml_legend=1 00:08:53.953 --rc geninfo_all_blocks=1 00:08:53.953 --rc geninfo_unexecuted_blocks=1 00:08:53.953 00:08:53.953 ' 00:08:53.953 09:08:37 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:53.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.953 --rc genhtml_branch_coverage=1 00:08:53.953 --rc 
genhtml_function_coverage=1 00:08:53.953 --rc genhtml_legend=1 00:08:53.953 --rc geninfo_all_blocks=1 00:08:53.953 --rc geninfo_unexecuted_blocks=1 00:08:53.953 00:08:53.953 ' 00:08:53.953 09:08:37 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:53.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.953 --rc genhtml_branch_coverage=1 00:08:53.953 --rc genhtml_function_coverage=1 00:08:53.953 --rc genhtml_legend=1 00:08:53.953 --rc geninfo_all_blocks=1 00:08:53.953 --rc geninfo_unexecuted_blocks=1 00:08:53.953 00:08:53.953 ' 00:08:53.953 09:08:37 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:53.953 09:08:37 -- nvmf/common.sh@7 -- # uname -s 00:08:53.954 09:08:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.954 09:08:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.954 09:08:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.954 09:08:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.954 09:08:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.954 09:08:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.954 09:08:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.954 09:08:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.954 09:08:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.954 09:08:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.954 09:08:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fe6bb936-df87-4d06-be6f-50f757130ba3 00:08:53.954 09:08:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=fe6bb936-df87-4d06-be6f-50f757130ba3 00:08:53.954 09:08:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.954 09:08:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.954 09:08:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:53.954 09:08:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:08:53.954 09:08:37 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:53.954 09:08:37 -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.954 09:08:37 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.954 09:08:37 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.954 09:08:37 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.954 09:08:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.954 09:08:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.954 09:08:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.954 09:08:37 -- paths/export.sh@5 -- # export PATH 00:08:53.954 09:08:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.954 09:08:37 -- nvmf/common.sh@51 -- # : 0 00:08:53.954 09:08:37 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:53.954 09:08:37 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:53.954 09:08:37 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:08:53.954 09:08:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.954 09:08:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.954 09:08:37 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:53.954 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:53.954 09:08:37 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:53.954 09:08:37 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:53.954 09:08:37 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:53.954 09:08:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:53.954 09:08:37 -- spdk/autotest.sh@32 -- # uname -s 00:08:53.954 09:08:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:53.954 09:08:37 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:53.954 09:08:37 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:53.954 09:08:37 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:08:53.954 09:08:37 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:53.954 09:08:37 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:53.954 09:08:37 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:53.954 09:08:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:53.954 09:08:37 -- spdk/autotest.sh@48 -- # udevadm_pid=54437 00:08:53.954 09:08:37 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:53.954 09:08:37 -- pm/common@17 -- # local monitor 00:08:53.954 09:08:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:53.954 09:08:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:53.954 09:08:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:53.954 09:08:37 -- pm/common@21 -- # date +%s 00:08:53.954 09:08:37 -- pm/common@25 -- # sleep 1 00:08:53.954 09:08:37 -- 
pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728983317 00:08:53.954 09:08:37 -- pm/common@21 -- # date +%s 00:08:53.954 09:08:37 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728983317 00:08:53.954 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728983317_collect-vmstat.pm.log 00:08:53.954 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728983317_collect-cpu-load.pm.log 00:08:54.889 09:08:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:54.889 09:08:38 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:54.889 09:08:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:54.889 09:08:38 -- common/autotest_common.sh@10 -- # set +x 00:08:55.147 09:08:38 -- spdk/autotest.sh@59 -- # create_test_list 00:08:55.147 09:08:38 -- common/autotest_common.sh@748 -- # xtrace_disable 00:08:55.147 09:08:38 -- common/autotest_common.sh@10 -- # set +x 00:08:55.147 09:08:38 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:08:55.147 09:08:38 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:08:55.147 09:08:38 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:08:55.147 09:08:38 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:08:55.147 09:08:38 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:08:55.147 09:08:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:55.147 09:08:38 -- common/autotest_common.sh@1455 -- # uname 00:08:55.147 09:08:38 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:08:55.147 09:08:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:55.147 09:08:38 -- common/autotest_common.sh@1475 -- 
# uname 00:08:55.147 09:08:38 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:08:55.147 09:08:38 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:55.147 09:08:38 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:55.147 lcov: LCOV version 1.15 00:08:55.147 09:08:38 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:09:13.327 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:09:13.327 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:09:31.458 09:09:13 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:09:31.458 09:09:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:31.458 09:09:13 -- common/autotest_common.sh@10 -- # set +x 00:09:31.458 09:09:13 -- spdk/autotest.sh@78 -- # rm -f 00:09:31.458 09:09:13 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:31.458 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:31.458 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:09:31.458 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:09:31.458 09:09:14 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:09:31.458 09:09:14 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:09:31.458 09:09:14 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:09:31.458 09:09:14 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:09:31.458 
09:09:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:31.458 09:09:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:09:31.458 09:09:14 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:09:31.458 09:09:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:31.458 09:09:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:31.458 09:09:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:31.458 09:09:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:09:31.458 09:09:14 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:09:31.458 09:09:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:31.458 09:09:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:31.458 09:09:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:31.458 09:09:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:09:31.458 09:09:14 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:09:31.458 09:09:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:09:31.458 09:09:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:31.458 09:09:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:31.458 09:09:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:09:31.458 09:09:14 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:09:31.458 09:09:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:09:31.458 09:09:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:31.458 09:09:14 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:09:31.458 09:09:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:31.458 09:09:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:31.458 09:09:14 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:09:31.458 09:09:14 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:09:31.458 09:09:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:09:31.458 No valid GPT data, bailing 00:09:31.458 09:09:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:31.458 09:09:14 -- scripts/common.sh@394 -- # pt= 00:09:31.458 09:09:14 -- scripts/common.sh@395 -- # return 1 00:09:31.458 09:09:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:09:31.458 1+0 records in 00:09:31.458 1+0 records out 00:09:31.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00354084 s, 296 MB/s 00:09:31.458 09:09:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:31.458 09:09:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:31.458 09:09:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:09:31.458 09:09:14 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:09:31.458 09:09:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:09:31.458 No valid GPT data, bailing 00:09:31.458 09:09:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:09:31.458 09:09:14 -- scripts/common.sh@394 -- # pt= 00:09:31.458 09:09:14 -- scripts/common.sh@395 -- # return 1 00:09:31.458 09:09:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:09:31.458 1+0 records in 00:09:31.458 1+0 records out 00:09:31.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0051747 s, 203 MB/s 00:09:31.458 09:09:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:31.458 09:09:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:31.459 09:09:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:09:31.459 09:09:14 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:09:31.459 09:09:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:09:31.459 No valid GPT data, bailing 00:09:31.459 09:09:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:09:31.459 09:09:14 -- scripts/common.sh@394 -- # pt= 00:09:31.459 09:09:14 -- scripts/common.sh@395 -- # return 1 00:09:31.459 09:09:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:09:31.459 1+0 records in 00:09:31.459 1+0 records out 00:09:31.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00426614 s, 246 MB/s 00:09:31.459 09:09:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:31.459 09:09:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:31.459 09:09:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:09:31.459 09:09:14 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:09:31.459 09:09:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:09:31.459 No valid GPT data, bailing 00:09:31.459 09:09:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:09:31.459 09:09:14 -- scripts/common.sh@394 -- # pt= 00:09:31.459 09:09:14 -- scripts/common.sh@395 -- # return 1 00:09:31.459 09:09:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:09:31.459 1+0 records in 00:09:31.459 1+0 records out 00:09:31.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00492886 s, 213 MB/s 00:09:31.459 09:09:14 -- spdk/autotest.sh@105 -- # sync 00:09:31.459 09:09:14 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:09:31.459 09:09:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:09:31.459 09:09:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:09:33.359 09:09:16 -- spdk/autotest.sh@111 -- # uname -s 00:09:33.359 09:09:16 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:09:33.359 09:09:16 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:09:33.359 09:09:16 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
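The block_in_use/dd sequence traced above condenses to: a namespace with no recognizable partition table (spdk-gpt.py bailing, empty blkid PTTYPE) gets its first megabyte zeroed so later tests start from a clean device. A hedged sketch, with the spdk-gpt.py probe omitted and the function name illustrative:

```shell
# Zero the first 1 MiB of a device that carries no partition table,
# mirroring the "No valid GPT data, bailing" path in the log. blkid
# exits non-zero when it finds no PTTYPE, hence the empty fallback.
wipe_if_unused() {
    local dev=$1 pt
    pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null) || pt=
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1 status=none
    fi
}
```

All four namespaces in this run take the wipe path, which is why each iteration logs the same 1+0 records in/out pair.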
00:09:33.618 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:33.618 Hugepages 00:09:33.618 node hugesize free / total 00:09:33.618 node0 1048576kB 0 / 0 00:09:33.618 node0 2048kB 0 / 0 00:09:33.618 00:09:33.618 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:33.877 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:09:33.877 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:09:33.877 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:09:33.877 09:09:17 -- spdk/autotest.sh@117 -- # uname -s 00:09:33.877 09:09:17 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:09:33.877 09:09:17 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:09:33.877 09:09:17 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:34.812 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:34.812 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:34.812 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:34.812 09:09:18 -- common/autotest_common.sh@1515 -- # sleep 1 00:09:35.785 09:09:19 -- common/autotest_common.sh@1516 -- # bdfs=() 00:09:35.785 09:09:19 -- common/autotest_common.sh@1516 -- # local bdfs 00:09:35.785 09:09:19 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:09:35.785 09:09:19 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:09:35.785 09:09:19 -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:35.785 09:09:19 -- common/autotest_common.sh@1496 -- # local bdfs 00:09:35.785 09:09:19 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:35.786 09:09:19 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:35.786 09:09:19 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:35.786 09:09:19 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:09:35.786 09:09:19 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:09:35.786 09:09:19 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:36.353 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:36.353 Waiting for block devices as requested 00:09:36.353 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:36.353 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:36.611 09:09:20 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:09:36.611 09:09:20 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:09:36.611 09:09:20 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:09:36.611 09:09:20 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:09:36.611 09:09:20 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:36.611 09:09:20 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:09:36.611 09:09:20 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:36.611 09:09:20 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:09:36.611 09:09:20 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:09:36.611 09:09:20 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:09:36.611 09:09:20 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:09:36.611 09:09:20 -- common/autotest_common.sh@1529 -- # grep oacs 00:09:36.611 09:09:20 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:09:36.611 09:09:20 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:09:36.611 09:09:20 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:09:36.611 09:09:20 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:09:36.611 09:09:20 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:09:36.611 09:09:20 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:09:36.611 09:09:20 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:09:36.611 09:09:20 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:09:36.611 09:09:20 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:09:36.611 09:09:20 -- common/autotest_common.sh@1541 -- # continue 00:09:36.611 09:09:20 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:09:36.611 09:09:20 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:09:36.611 09:09:20 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:09:36.611 09:09:20 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:09:36.611 09:09:20 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:36.611 09:09:20 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:09:36.611 09:09:20 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:36.611 09:09:20 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:09:36.611 09:09:20 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:09:36.611 09:09:20 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:09:36.611 09:09:20 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:09:36.611 09:09:20 -- common/autotest_common.sh@1529 -- # grep oacs 00:09:36.611 09:09:20 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:09:36.611 09:09:20 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:09:36.611 09:09:20 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:09:36.611 09:09:20 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:09:36.611 09:09:20 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:09:36.611 09:09:20 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:09:36.611 09:09:20 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:09:36.611 09:09:20 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:09:36.611 09:09:20 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:09:36.611 09:09:20 -- common/autotest_common.sh@1541 -- # continue 00:09:36.611 09:09:20 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:09:36.611 09:09:20 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:36.611 09:09:20 -- common/autotest_common.sh@10 -- # set +x 00:09:36.611 09:09:20 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:09:36.611 09:09:20 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:36.611 09:09:20 -- common/autotest_common.sh@10 -- # set +x 00:09:36.611 09:09:20 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:37.177 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:37.435 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.435 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.435 09:09:21 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:09:37.435 09:09:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:37.435 09:09:21 -- common/autotest_common.sh@10 -- # set +x 00:09:37.435 09:09:21 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:09:37.435 09:09:21 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:09:37.435 09:09:21 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:09:37.435 09:09:21 -- common/autotest_common.sh@1561 -- # bdfs=() 00:09:37.435 09:09:21 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:09:37.435 09:09:21 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:09:37.435 09:09:21 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:09:37.435 09:09:21 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:09:37.435 
09:09:21 -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:37.435 09:09:21 -- common/autotest_common.sh@1496 -- # local bdfs 00:09:37.435 09:09:21 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:37.435 09:09:21 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:37.435 09:09:21 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:37.693 09:09:21 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:09:37.693 09:09:21 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:09:37.693 09:09:21 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:09:37.693 09:09:21 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:09:37.693 09:09:21 -- common/autotest_common.sh@1564 -- # device=0x0010 00:09:37.693 09:09:21 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:37.693 09:09:21 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:09:37.693 09:09:21 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:09:37.693 09:09:21 -- common/autotest_common.sh@1564 -- # device=0x0010 00:09:37.693 09:09:21 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:37.693 09:09:21 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:09:37.693 09:09:21 -- common/autotest_common.sh@1570 -- # return 0 00:09:37.693 09:09:21 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:09:37.693 09:09:21 -- common/autotest_common.sh@1578 -- # return 0 00:09:37.693 09:09:21 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:09:37.693 09:09:21 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:09:37.693 09:09:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:37.693 09:09:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:37.693 09:09:21 -- spdk/autotest.sh@149 -- # timing_enter lib 00:09:37.693 09:09:21 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:09:37.693 09:09:21 -- common/autotest_common.sh@10 -- # set +x 00:09:37.693 09:09:21 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:09:37.693 09:09:21 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:37.693 09:09:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:37.693 09:09:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.693 09:09:21 -- common/autotest_common.sh@10 -- # set +x 00:09:37.693 ************************************ 00:09:37.693 START TEST env 00:09:37.693 ************************************ 00:09:37.693 09:09:21 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:37.693 * Looking for test storage... 00:09:37.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:37.693 09:09:21 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:37.693 09:09:21 env -- common/autotest_common.sh@1691 -- # lcov --version 00:09:37.693 09:09:21 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:37.952 09:09:21 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:37.952 09:09:21 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.952 09:09:21 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.952 09:09:21 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.952 09:09:21 env -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.952 09:09:21 env -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.952 09:09:21 env -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.952 09:09:21 env -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.952 09:09:21 env -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.952 09:09:21 env -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.952 09:09:21 env -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.952 09:09:21 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.952 09:09:21 env -- 
scripts/common.sh@344 -- # case "$op" in 00:09:37.952 09:09:21 env -- scripts/common.sh@345 -- # : 1 00:09:37.952 09:09:21 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.952 09:09:21 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:37.952 09:09:21 env -- scripts/common.sh@365 -- # decimal 1 00:09:37.952 09:09:21 env -- scripts/common.sh@353 -- # local d=1 00:09:37.952 09:09:21 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.952 09:09:21 env -- scripts/common.sh@355 -- # echo 1 00:09:37.952 09:09:21 env -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.952 09:09:21 env -- scripts/common.sh@366 -- # decimal 2 00:09:37.952 09:09:21 env -- scripts/common.sh@353 -- # local d=2 00:09:37.952 09:09:21 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.952 09:09:21 env -- scripts/common.sh@355 -- # echo 2 00:09:37.952 09:09:21 env -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.952 09:09:21 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.952 09:09:21 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.952 09:09:21 env -- scripts/common.sh@368 -- # return 0 00:09:37.952 09:09:21 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.952 09:09:21 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:37.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.952 --rc genhtml_branch_coverage=1 00:09:37.952 --rc genhtml_function_coverage=1 00:09:37.952 --rc genhtml_legend=1 00:09:37.952 --rc geninfo_all_blocks=1 00:09:37.952 --rc geninfo_unexecuted_blocks=1 00:09:37.952 00:09:37.952 ' 00:09:37.952 09:09:21 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:37.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.952 --rc genhtml_branch_coverage=1 00:09:37.952 --rc genhtml_function_coverage=1 00:09:37.952 --rc genhtml_legend=1 00:09:37.952 --rc 
geninfo_all_blocks=1 00:09:37.952 --rc geninfo_unexecuted_blocks=1 00:09:37.952 00:09:37.952 ' 00:09:37.952 09:09:21 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:37.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.952 --rc genhtml_branch_coverage=1 00:09:37.952 --rc genhtml_function_coverage=1 00:09:37.952 --rc genhtml_legend=1 00:09:37.952 --rc geninfo_all_blocks=1 00:09:37.952 --rc geninfo_unexecuted_blocks=1 00:09:37.952 00:09:37.952 ' 00:09:37.952 09:09:21 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:37.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.952 --rc genhtml_branch_coverage=1 00:09:37.952 --rc genhtml_function_coverage=1 00:09:37.952 --rc genhtml_legend=1 00:09:37.952 --rc geninfo_all_blocks=1 00:09:37.952 --rc geninfo_unexecuted_blocks=1 00:09:37.952 00:09:37.952 ' 00:09:37.952 09:09:21 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:37.952 09:09:21 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:37.952 09:09:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.952 09:09:21 env -- common/autotest_common.sh@10 -- # set +x 00:09:37.952 ************************************ 00:09:37.952 START TEST env_memory 00:09:37.952 ************************************ 00:09:37.952 09:09:21 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:37.952 00:09:37.952 00:09:37.952 CUnit - A unit testing framework for C - Version 2.1-3 00:09:37.952 http://cunit.sourceforge.net/ 00:09:37.952 00:09:37.952 00:09:37.952 Suite: memory 00:09:37.952 Test: alloc and free memory map ...[2024-10-15 09:09:21.714074] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:37.952 passed 00:09:37.952 Test: mem map translation ...[2024-10-15 09:09:21.762917] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:37.952 [2024-10-15 09:09:21.763335] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:37.952 [2024-10-15 09:09:21.763556] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:37.952 [2024-10-15 09:09:21.763826] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:37.952 passed 00:09:37.952 Test: mem map registration ...[2024-10-15 09:09:21.842164] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:09:37.952 [2024-10-15 09:09:21.842506] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:09:37.952 passed 00:09:38.210 Test: mem map adjacent registrations ...passed 00:09:38.210 00:09:38.210 Run Summary: Type Total Ran Passed Failed Inactive 00:09:38.210 suites 1 1 n/a 0 0 00:09:38.210 tests 4 4 4 0 0 00:09:38.210 asserts 152 152 152 0 n/a 00:09:38.210 00:09:38.210 Elapsed time = 0.273 seconds 00:09:38.210 00:09:38.210 real 0m0.321s 00:09:38.210 user 0m0.273s 00:09:38.210 sys 0m0.037s 00:09:38.211 09:09:21 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.211 ************************************ 00:09:38.211 END TEST env_memory 00:09:38.211 ************************************ 00:09:38.211 09:09:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:38.211 09:09:22 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:38.211 
09:09:22 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:38.211 09:09:22 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.211 09:09:22 env -- common/autotest_common.sh@10 -- # set +x 00:09:38.211 ************************************ 00:09:38.211 START TEST env_vtophys 00:09:38.211 ************************************ 00:09:38.211 09:09:22 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:38.211 EAL: lib.eal log level changed from notice to debug 00:09:38.211 EAL: Detected lcore 0 as core 0 on socket 0 00:09:38.211 EAL: Detected lcore 1 as core 0 on socket 0 00:09:38.211 EAL: Detected lcore 2 as core 0 on socket 0 00:09:38.211 EAL: Detected lcore 3 as core 0 on socket 0 00:09:38.211 EAL: Detected lcore 4 as core 0 on socket 0 00:09:38.211 EAL: Detected lcore 5 as core 0 on socket 0 00:09:38.211 EAL: Detected lcore 6 as core 0 on socket 0 00:09:38.211 EAL: Detected lcore 7 as core 0 on socket 0 00:09:38.211 EAL: Detected lcore 8 as core 0 on socket 0 00:09:38.211 EAL: Detected lcore 9 as core 0 on socket 0 00:09:38.211 EAL: Maximum logical cores by configuration: 128 00:09:38.211 EAL: Detected CPU lcores: 10 00:09:38.211 EAL: Detected NUMA nodes: 1 00:09:38.211 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:38.211 EAL: Detected shared linkage of DPDK 00:09:38.211 EAL: No shared files mode enabled, IPC will be disabled 00:09:38.211 EAL: Selected IOVA mode 'PA' 00:09:38.211 EAL: Probing VFIO support... 00:09:38.211 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:38.211 EAL: VFIO modules not loaded, skipping VFIO support... 00:09:38.211 EAL: Ask a virtual area of 0x2e000 bytes 00:09:38.211 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:38.211 EAL: Setting up physically contiguous memory... 
00:09:38.211 EAL: Setting maximum number of open files to 524288 00:09:38.211 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:38.211 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:38.211 EAL: Ask a virtual area of 0x61000 bytes 00:09:38.211 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:38.211 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:38.211 EAL: Ask a virtual area of 0x400000000 bytes 00:09:38.211 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:38.211 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:38.211 EAL: Ask a virtual area of 0x61000 bytes 00:09:38.211 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:38.211 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:38.211 EAL: Ask a virtual area of 0x400000000 bytes 00:09:38.211 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:38.211 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:38.211 EAL: Ask a virtual area of 0x61000 bytes 00:09:38.211 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:38.211 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:38.211 EAL: Ask a virtual area of 0x400000000 bytes 00:09:38.211 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:38.211 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:38.211 EAL: Ask a virtual area of 0x61000 bytes 00:09:38.211 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:38.211 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:38.211 EAL: Ask a virtual area of 0x400000000 bytes 00:09:38.211 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:38.211 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:38.211 EAL: Hugepages will be freed exactly as allocated. 
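The memseg reservations above are easy to sanity-check: each of the four lists pairs a small descriptor area (0x61000) with a data window of n_segs × hugepage_sz, and 8192 segments of 2 MiB is exactly the 0x400000000 (16 GiB) virtual area EAL reports finding each time:

```shell
# Reproduce the per-list window size from the EAL trace above:
# n_segs:8192 pages of hugepage_sz:2097152 bytes each.
n_segs=8192
hugepage_sz=$((2 * 1024 * 1024))
printf '0x%x\n' $((n_segs * hugepage_sz))   # prints 0x400000000
```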
00:09:38.211 EAL: No shared files mode enabled, IPC is disabled 00:09:38.211 EAL: No shared files mode enabled, IPC is disabled 00:09:38.469 EAL: TSC frequency is ~2200000 KHz 00:09:38.469 EAL: Main lcore 0 is ready (tid=7f899ba79a40;cpuset=[0]) 00:09:38.469 EAL: Trying to obtain current memory policy. 00:09:38.469 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:38.469 EAL: Restoring previous memory policy: 0 00:09:38.469 EAL: request: mp_malloc_sync 00:09:38.469 EAL: No shared files mode enabled, IPC is disabled 00:09:38.469 EAL: Heap on socket 0 was expanded by 2MB 00:09:38.469 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:38.469 EAL: No PCI address specified using 'addr=' in: bus=pci 00:09:38.469 EAL: Mem event callback 'spdk:(nil)' registered 00:09:38.469 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:09:38.469 00:09:38.469 00:09:38.469 CUnit - A unit testing framework for C - Version 2.1-3 00:09:38.469 http://cunit.sourceforge.net/ 00:09:38.469 00:09:38.469 00:09:38.469 Suite: components_suite 00:09:39.104 Test: vtophys_malloc_test ...passed 00:09:39.104 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:39.104 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:39.105 EAL: Restoring previous memory policy: 4 00:09:39.105 EAL: Calling mem event callback 'spdk:(nil)' 00:09:39.105 EAL: request: mp_malloc_sync 00:09:39.105 EAL: No shared files mode enabled, IPC is disabled 00:09:39.105 EAL: Heap on socket 0 was expanded by 4MB 00:09:39.105 EAL: Calling mem event callback 'spdk:(nil)' 00:09:39.105 EAL: request: mp_malloc_sync 00:09:39.105 EAL: No shared files mode enabled, IPC is disabled 00:09:39.105 EAL: Heap on socket 0 was shrunk by 4MB 00:09:39.105 EAL: Trying to obtain current memory policy. 
00:09:39.105 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:39.105 EAL: Restoring previous memory policy: 4 00:09:39.105 EAL: Calling mem event callback 'spdk:(nil)' 00:09:39.105 EAL: request: mp_malloc_sync 00:09:39.105 EAL: No shared files mode enabled, IPC is disabled 00:09:39.105 EAL: Heap on socket 0 was expanded by 6MB 00:09:39.105 EAL: Calling mem event callback 'spdk:(nil)' 00:09:39.105 EAL: request: mp_malloc_sync 00:09:39.105 EAL: No shared files mode enabled, IPC is disabled 00:09:39.105 EAL: Heap on socket 0 was shrunk by 6MB 00:09:39.105 EAL: Trying to obtain current memory policy. 00:09:39.105 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:39.105 EAL: Restoring previous memory policy: 4 00:09:39.105 EAL: Calling mem event callback 'spdk:(nil)' 00:09:39.105 EAL: request: mp_malloc_sync 00:09:39.105 EAL: No shared files mode enabled, IPC is disabled 00:09:39.105 EAL: Heap on socket 0 was expanded by 10MB 00:09:39.105 EAL: Calling mem event callback 'spdk:(nil)' 00:09:39.105 EAL: request: mp_malloc_sync 00:09:39.105 EAL: No shared files mode enabled, IPC is disabled 00:09:39.105 EAL: Heap on socket 0 was shrunk by 10MB 00:09:39.105 EAL: Trying to obtain current memory policy. 00:09:39.105 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:39.105 EAL: Restoring previous memory policy: 4 00:09:39.105 EAL: Calling mem event callback 'spdk:(nil)' 00:09:39.105 EAL: request: mp_malloc_sync 00:09:39.105 EAL: No shared files mode enabled, IPC is disabled 00:09:39.105 EAL: Heap on socket 0 was expanded by 18MB 00:09:39.105 EAL: Calling mem event callback 'spdk:(nil)' 00:09:39.105 EAL: request: mp_malloc_sync 00:09:39.105 EAL: No shared files mode enabled, IPC is disabled 00:09:39.105 EAL: Heap on socket 0 was shrunk by 18MB 00:09:39.105 EAL: Trying to obtain current memory policy. 
00:09:39.105 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:39.105 EAL: Restoring previous memory policy: 4 00:09:39.105 EAL: Calling mem event callback 'spdk:(nil)' 00:09:39.105 EAL: request: mp_malloc_sync 00:09:39.105 EAL: No shared files mode enabled, IPC is disabled 00:09:39.105 EAL: Heap on socket 0 was expanded by 34MB 00:09:39.105 EAL: Calling mem event callback 'spdk:(nil)' 00:09:39.105 EAL: request: mp_malloc_sync 00:09:39.105 EAL: No shared files mode enabled, IPC is disabled 00:09:39.105 EAL: Heap on socket 0 was shrunk by 34MB 00:09:39.363 EAL: Trying to obtain current memory policy. 00:09:39.363 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:39.363 EAL: Restoring previous memory policy: 4 00:09:39.363 EAL: Calling mem event callback 'spdk:(nil)' 00:09:39.363 EAL: request: mp_malloc_sync 00:09:39.363 EAL: No shared files mode enabled, IPC is disabled 00:09:39.363 EAL: Heap on socket 0 was expanded by 66MB 00:09:39.363 EAL: Calling mem event callback 'spdk:(nil)' 00:09:39.363 EAL: request: mp_malloc_sync 00:09:39.363 EAL: No shared files mode enabled, IPC is disabled 00:09:39.363 EAL: Heap on socket 0 was shrunk by 66MB 00:09:39.621 EAL: Trying to obtain current memory policy. 00:09:39.621 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:39.621 EAL: Restoring previous memory policy: 4 00:09:39.621 EAL: Calling mem event callback 'spdk:(nil)' 00:09:39.621 EAL: request: mp_malloc_sync 00:09:39.621 EAL: No shared files mode enabled, IPC is disabled 00:09:39.621 EAL: Heap on socket 0 was expanded by 130MB 00:09:39.880 EAL: Calling mem event callback 'spdk:(nil)' 00:09:39.880 EAL: request: mp_malloc_sync 00:09:39.880 EAL: No shared files mode enabled, IPC is disabled 00:09:39.880 EAL: Heap on socket 0 was shrunk by 130MB 00:09:39.880 EAL: Trying to obtain current memory policy. 
00:09:39.880 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:40.138 EAL: Restoring previous memory policy: 4 00:09:40.138 EAL: Calling mem event callback 'spdk:(nil)' 00:09:40.138 EAL: request: mp_malloc_sync 00:09:40.138 EAL: No shared files mode enabled, IPC is disabled 00:09:40.138 EAL: Heap on socket 0 was expanded by 258MB 00:09:40.396 EAL: Calling mem event callback 'spdk:(nil)' 00:09:40.654 EAL: request: mp_malloc_sync 00:09:40.654 EAL: No shared files mode enabled, IPC is disabled 00:09:40.654 EAL: Heap on socket 0 was shrunk by 258MB 00:09:40.912 EAL: Trying to obtain current memory policy. 00:09:40.912 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:41.170 EAL: Restoring previous memory policy: 4 00:09:41.170 EAL: Calling mem event callback 'spdk:(nil)' 00:09:41.170 EAL: request: mp_malloc_sync 00:09:41.170 EAL: No shared files mode enabled, IPC is disabled 00:09:41.170 EAL: Heap on socket 0 was expanded by 514MB 00:09:42.107 EAL: Calling mem event callback 'spdk:(nil)' 00:09:42.107 EAL: request: mp_malloc_sync 00:09:42.107 EAL: No shared files mode enabled, IPC is disabled 00:09:42.107 EAL: Heap on socket 0 was shrunk by 514MB 00:09:43.044 EAL: Trying to obtain current memory policy. 
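The expansion sizes in vtophys_spdk_malloc_test (4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB) appear to follow a 2^k + 2 progression, so each step roughly doubles the amount of heap growth exercised, from a few megabytes up past a gigabyte. A one-liner reproduces the ladder:

```shell
# Generate the 2^k + 2 MB allocation sizes seen in the malloc test.
for k in $(seq 1 10); do
    printf '%d ' $((2 ** k + 2))
done
echo   # prints: 4 6 10 18 34 66 130 258 514 1026
```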
00:09:43.044 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:43.302 EAL: Restoring previous memory policy: 4 00:09:43.302 EAL: Calling mem event callback 'spdk:(nil)' 00:09:43.302 EAL: request: mp_malloc_sync 00:09:43.302 EAL: No shared files mode enabled, IPC is disabled 00:09:43.302 EAL: Heap on socket 0 was expanded by 1026MB 00:09:45.206 EAL: Calling mem event callback 'spdk:(nil)' 00:09:45.465 EAL: request: mp_malloc_sync 00:09:45.465 EAL: No shared files mode enabled, IPC is disabled 00:09:45.465 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:46.844 passed 00:09:46.844 00:09:46.844 Run Summary: Type Total Ran Passed Failed Inactive 00:09:46.844 suites 1 1 n/a 0 0 00:09:46.844 tests 2 2 2 0 0 00:09:46.844 asserts 5649 5649 5649 0 n/a 00:09:46.844 00:09:46.844 Elapsed time = 8.336 seconds 00:09:46.844 EAL: Calling mem event callback 'spdk:(nil)' 00:09:46.844 EAL: request: mp_malloc_sync 00:09:46.844 EAL: No shared files mode enabled, IPC is disabled 00:09:46.844 EAL: Heap on socket 0 was shrunk by 2MB 00:09:46.844 EAL: No shared files mode enabled, IPC is disabled 00:09:46.844 EAL: No shared files mode enabled, IPC is disabled 00:09:46.844 EAL: No shared files mode enabled, IPC is disabled 00:09:46.844 00:09:46.844 real 0m8.695s 00:09:46.844 user 0m7.231s 00:09:46.844 sys 0m1.272s 00:09:46.844 ************************************ 00:09:46.844 END TEST env_vtophys 00:09:46.844 ************************************ 00:09:46.844 09:09:30 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.844 09:09:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:46.844 09:09:30 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:46.844 09:09:30 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:46.844 09:09:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.844 09:09:30 env -- common/autotest_common.sh@10 -- # set +x 00:09:47.103 
************************************ 00:09:47.103 START TEST env_pci 00:09:47.103 ************************************ 00:09:47.103 09:09:30 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:47.103 00:09:47.103 00:09:47.103 CUnit - A unit testing framework for C - Version 2.1-3 00:09:47.103 http://cunit.sourceforge.net/ 00:09:47.103 00:09:47.103 00:09:47.103 Suite: pci 00:09:47.103 Test: pci_hook ...[2024-10-15 09:09:30.813524] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1111:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56778 has claimed it 00:09:47.103 passed 00:09:47.103 00:09:47.103 Run Summary: Type Total Ran Passed Failed Inactive 00:09:47.103 suites 1 1 n/a 0 0 00:09:47.103 tests 1 1 1 0 0 00:09:47.103 asserts 25 25 25 0 n/a 00:09:47.103 00:09:47.103 Elapsed time = 0.008 seconds 00:09:47.103 EAL: Cannot find device (10000:00:01.0) 00:09:47.103 EAL: Failed to attach device on primary process 00:09:47.103 00:09:47.103 real 0m0.083s 00:09:47.103 user 0m0.041s 00:09:47.103 sys 0m0.041s 00:09:47.103 09:09:30 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.103 09:09:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:47.103 ************************************ 00:09:47.103 END TEST env_pci 00:09:47.103 ************************************ 00:09:47.103 09:09:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:47.103 09:09:30 env -- env/env.sh@15 -- # uname 00:09:47.103 09:09:30 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:47.103 09:09:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:47.103 09:09:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:47.103 09:09:30 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:47.103 09:09:30 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.103 09:09:30 env -- common/autotest_common.sh@10 -- # set +x 00:09:47.103 ************************************ 00:09:47.103 START TEST env_dpdk_post_init 00:09:47.103 ************************************ 00:09:47.103 09:09:30 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:47.103 EAL: Detected CPU lcores: 10 00:09:47.103 EAL: Detected NUMA nodes: 1 00:09:47.103 EAL: Detected shared linkage of DPDK 00:09:47.103 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:47.103 EAL: Selected IOVA mode 'PA' 00:09:47.377 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:47.377 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:47.377 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:09:47.377 Starting DPDK initialization... 00:09:47.377 Starting SPDK post initialization... 00:09:47.377 SPDK NVMe probe 00:09:47.377 Attaching to 0000:00:10.0 00:09:47.377 Attaching to 0000:00:11.0 00:09:47.377 Attached to 0000:00:10.0 00:09:47.377 Attached to 0000:00:11.0 00:09:47.377 Cleaning up... 
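The env_vtophys run above alternates EAL "Heap on socket 0 was expanded by NMB" and "... was shrunk by NMB" records, and a clean run nets out to zero. A minimal standalone sketch (not part of the autotest harness; the sample lines and function name are illustrative) that checks this balance from log text:

```python
import re

# Sample EAL heap lines, in the format seen in the env_vtophys output above.
log = """\
EAL: Heap on socket 0 was expanded by 258MB
EAL: Heap on socket 0 was shrunk by 258MB
EAL: Heap on socket 0 was expanded by 514MB
EAL: Heap on socket 0 was shrunk by 514MB
EAL: Heap on socket 0 was expanded by 1026MB
EAL: Heap on socket 0 was shrunk by 1026MB
"""

def net_heap_change_mb(lines):
    """Sum expansions minus shrinks, in MB, from EAL heap log records."""
    net = 0
    for line in lines:
        m = re.search(r"Heap on socket \d+ was (expanded|shrunk) by (\d+)MB", line)
        if m:
            delta = int(m.group(2))
            net += delta if m.group(1) == "expanded" else -delta
    return net

print(net_heap_change_mb(log.splitlines()))  # prints 0 for a balanced run
```

A nonzero result would flag a run where an expansion was never matched by a shrink.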
00:09:47.377 
00:09:47.377 real	0m0.295s
00:09:47.377 user	0m0.091s
00:09:47.377 sys	0m0.103s
00:09:47.377 09:09:31 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:47.377 09:09:31 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:09:47.377 ************************************
00:09:47.377 END TEST env_dpdk_post_init
00:09:47.377 ************************************
00:09:47.377 09:09:31 env -- env/env.sh@26 -- # uname
00:09:47.377 09:09:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:09:47.377 09:09:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:09:47.377 09:09:31 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:47.377 09:09:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:47.377 09:09:31 env -- common/autotest_common.sh@10 -- # set +x
00:09:47.377 ************************************
00:09:47.377 START TEST env_mem_callbacks
00:09:47.377 ************************************
00:09:47.377 09:09:31 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:09:47.660 EAL: Detected CPU lcores: 10
00:09:47.660 EAL: Detected NUMA nodes: 1
00:09:47.660 EAL: Detected shared linkage of DPDK
00:09:47.660 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:09:47.660 EAL: Selected IOVA mode 'PA'
00:09:47.660 
00:09:47.660 
00:09:47.660 CUnit - A unit testing framework for C - Version 2.1-3
00:09:47.660 http://cunit.sourceforge.net/
00:09:47.660 
00:09:47.660 
00:09:47.660 Suite: memory
00:09:47.660 Test: test ...
00:09:47.660 register 0x200000200000 2097152
00:09:47.660 malloc 3145728
00:09:47.660 TELEMETRY: No legacy callbacks, legacy socket not created
00:09:47.660 register 0x200000400000 4194304
00:09:47.660 buf 0x2000004fffc0 len 3145728 PASSED
00:09:47.660 malloc 64
00:09:47.660 buf 0x2000004ffec0 len 64 PASSED
00:09:47.660 malloc 4194304
00:09:47.660 register 0x200000800000 6291456
00:09:47.660 buf 0x2000009fffc0 len 4194304 PASSED
00:09:47.660 free 0x2000004fffc0 3145728
00:09:47.660 free 0x2000004ffec0 64
00:09:47.660 unregister 0x200000400000 4194304 PASSED
00:09:47.660 free 0x2000009fffc0 4194304
00:09:47.660 unregister 0x200000800000 6291456 PASSED
00:09:47.660 malloc 8388608
00:09:47.660 register 0x200000400000 10485760
00:09:47.660 buf 0x2000005fffc0 len 8388608 PASSED
00:09:47.660 free 0x2000005fffc0 8388608
00:09:47.660 unregister 0x200000400000 10485760 PASSED
00:09:47.660 passed
00:09:47.660 
00:09:47.660 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:09:47.660               suites      1      1    n/a      0        0
00:09:47.660                tests      1      1      1      0        0
00:09:47.660              asserts     15     15     15      0      n/a
00:09:47.660 
00:09:47.660 Elapsed time =    0.085 seconds
00:09:47.920 
00:09:47.920 real	0m0.313s
00:09:47.920 user	0m0.128s
00:09:47.920 sys	0m0.079s
00:09:47.920 09:09:31 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:47.920 ************************************
00:09:47.920 END TEST env_mem_callbacks
00:09:47.920 ************************************
00:09:47.920 09:09:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:09:47.920 ************************************
00:09:47.920 END TEST env
00:09:47.920 ************************************
00:09:47.920 
00:09:47.920 real	0m10.191s
00:09:47.920 user	0m7.971s
00:09:47.920 sys	0m1.794s
00:09:47.920 09:09:31 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:47.920 09:09:31 env -- common/autotest_common.sh@10 -- # set +x
00:09:47.920 09:09:31 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:09:47.920 09:09:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:47.920 09:09:31 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:47.920 09:09:31 -- common/autotest_common.sh@10 -- # set +x
00:09:47.920 ************************************
00:09:47.920 START TEST rpc
00:09:47.920 ************************************
00:09:47.920 09:09:31 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:09:47.920 * Looking for test storage...
00:09:47.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:09:47.920 09:09:31 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:09:47.920 09:09:31 rpc -- common/autotest_common.sh@1691 -- # lcov --version
00:09:47.920 09:09:31 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:09:48.179 09:09:31 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:09:48.179 09:09:31 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:48.179 09:09:31 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:48.179 09:09:31 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:48.179 09:09:31 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:09:48.179 09:09:31 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:09:48.179 09:09:31 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:09:48.179 09:09:31 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:09:48.179 09:09:31 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:09:48.179 09:09:31 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:09:48.179 09:09:31 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:09:48.179 09:09:31 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:48.179 09:09:31 rpc -- scripts/common.sh@344 -- # case "$op" in
00:09:48.179 09:09:31 rpc -- scripts/common.sh@345 -- # : 1
00:09:48.179 09:09:31 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:48.179 09:09:31 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:48.179 09:09:31 rpc -- scripts/common.sh@365 -- # decimal 1
00:09:48.179 09:09:31 rpc -- scripts/common.sh@353 -- # local d=1
00:09:48.179 09:09:31 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:48.179 09:09:31 rpc -- scripts/common.sh@355 -- # echo 1
00:09:48.179 09:09:31 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:09:48.179 09:09:31 rpc -- scripts/common.sh@366 -- # decimal 2
00:09:48.179 09:09:31 rpc -- scripts/common.sh@353 -- # local d=2
00:09:48.179 09:09:31 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:48.179 09:09:31 rpc -- scripts/common.sh@355 -- # echo 2
00:09:48.180 09:09:31 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:09:48.180 09:09:31 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:48.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:48.180 09:09:31 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:48.180 09:09:31 rpc -- scripts/common.sh@368 -- # return 0
00:09:48.180 09:09:31 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:48.180 09:09:31 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:09:48.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:48.180 --rc genhtml_branch_coverage=1
00:09:48.180 --rc genhtml_function_coverage=1
00:09:48.180 --rc genhtml_legend=1
00:09:48.180 --rc geninfo_all_blocks=1
00:09:48.180 --rc geninfo_unexecuted_blocks=1
00:09:48.180 
00:09:48.180 '
00:09:48.180 09:09:31 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:09:48.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:48.180 --rc genhtml_branch_coverage=1
00:09:48.180 --rc genhtml_function_coverage=1
00:09:48.180 --rc genhtml_legend=1
00:09:48.180 --rc geninfo_all_blocks=1
00:09:48.180 --rc geninfo_unexecuted_blocks=1
00:09:48.180 
00:09:48.180 '
00:09:48.180 09:09:31 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:09:48.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:48.180 --rc genhtml_branch_coverage=1
00:09:48.180 --rc genhtml_function_coverage=1
00:09:48.180 --rc genhtml_legend=1
00:09:48.180 --rc geninfo_all_blocks=1
00:09:48.180 --rc geninfo_unexecuted_blocks=1
00:09:48.180 
00:09:48.180 '
00:09:48.180 09:09:31 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:09:48.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:48.180 --rc genhtml_branch_coverage=1
00:09:48.180 --rc genhtml_function_coverage=1
00:09:48.180 --rc genhtml_legend=1
00:09:48.180 --rc geninfo_all_blocks=1
00:09:48.180 --rc geninfo_unexecuted_blocks=1
00:09:48.180 
00:09:48.180 '
00:09:48.180 09:09:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56905
00:09:48.180 09:09:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:09:48.180 09:09:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56905
00:09:48.180 09:09:31 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:09:48.180 09:09:31 rpc -- common/autotest_common.sh@831 -- # '[' -z 56905 ']'
00:09:48.180 09:09:31 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:48.180 09:09:31 rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:48.180 09:09:31 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:48.180 09:09:31 rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:48.180 09:09:31 rpc -- common/autotest_common.sh@10 -- # set +x
00:09:48.180 [2024-10-15 09:09:32.004788] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization...
00:09:48.180 [2024-10-15 09:09:32.005246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56905 ]
00:09:48.440 [2024-10-15 09:09:32.177300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:48.440 [2024-10-15 09:09:32.351483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:09:48.440 [2024-10-15 09:09:32.351887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56905' to capture a snapshot of events at runtime.
00:09:48.440 [2024-10-15 09:09:32.352145] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:48.440 [2024-10-15 09:09:32.352387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:48.440 [2024-10-15 09:09:32.352528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56905 for offline analysis/debug.
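Once spdk_tgt is up, the rpc tests in this log drive it through JSON-RPC 2.0 requests sent over the UNIX domain socket /var/tmp/spdk.sock (the `rpc_cmd bdev_malloc_create 8 512` calls below, for example, which produce the 16384 x 512-byte Malloc bdevs shown in the dumps). A minimal sketch of the request shape only; `build_rpc_request` is an illustrative helper, not part of SPDK, and actually sending it would require a running target:

```python
import json

def build_rpc_request(method, params=None, request_id=1):
    """Build a JSON-RPC 2.0 request body of the kind spdk_tgt accepts on its socket."""
    req = {"jsonrpc": "2.0", "method": method, "id": request_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

# An 8 MiB malloc bdev with 512-byte blocks is 16384 blocks,
# matching the "num_blocks": 16384, "block_size": 512 dumps in this log.
print(build_rpc_request("bdev_malloc_create",
                        {"num_blocks": 16384, "block_size": 512}))
```

In the actual test run this framing is handled by SPDK's rpc.py client behind the `rpc_cmd` shell function.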
00:09:48.440 [2024-10-15 09:09:32.354367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:49.818 09:09:33 rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:49.818 09:09:33 rpc -- common/autotest_common.sh@864 -- # return 0
00:09:49.818 09:09:33 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:09:49.818 09:09:33 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:09:49.818 09:09:33 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:09:49.818 09:09:33 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:09:49.818 09:09:33 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:49.818 09:09:33 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:49.818 09:09:33 rpc -- common/autotest_common.sh@10 -- # set +x
00:09:49.818 ************************************
00:09:49.818 START TEST rpc_integrity
00:09:49.818 ************************************
00:09:49.818 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:09:49.818 09:09:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:09:49.818 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.818 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:49.818 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.818 09:09:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:09:49.818 09:09:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:09:49.818 09:09:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:09:49.818 09:09:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:09:49.818 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.818 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:49.818 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.818 09:09:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:09:49.818 09:09:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:09:49.818 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.818 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:49.818 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.818 09:09:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:09:49.818 {
00:09:49.818 "name": "Malloc0",
00:09:49.818 "aliases": [
00:09:49.818 "c6f425ff-8576-4a55-b686-ff4d59d40c24"
00:09:49.818 ],
00:09:49.818 "product_name": "Malloc disk",
00:09:49.818 "block_size": 512,
00:09:49.818 "num_blocks": 16384,
00:09:49.818 "uuid": "c6f425ff-8576-4a55-b686-ff4d59d40c24",
00:09:49.818 "assigned_rate_limits": {
00:09:49.818 "rw_ios_per_sec": 0,
00:09:49.818 "rw_mbytes_per_sec": 0,
00:09:49.818 "r_mbytes_per_sec": 0,
00:09:49.819 "w_mbytes_per_sec": 0
00:09:49.819 },
00:09:49.819 "claimed": false,
00:09:49.819 "zoned": false,
00:09:49.819 "supported_io_types": {
00:09:49.819 "read": true,
00:09:49.819 "write": true,
00:09:49.819 "unmap": true,
00:09:49.819 "flush": true,
00:09:49.819 "reset": true,
00:09:49.819 "nvme_admin": false,
00:09:49.819 "nvme_io": false,
00:09:49.819 "nvme_io_md": false,
00:09:49.819 "write_zeroes": true,
00:09:49.819 "zcopy": true,
00:09:49.819 "get_zone_info": false,
00:09:49.819 "zone_management": false,
00:09:49.819 "zone_append": false,
00:09:49.819 "compare": false,
00:09:49.819 "compare_and_write": false,
00:09:49.819 "abort": true,
00:09:49.819 "seek_hole": false,
00:09:49.819 "seek_data": false,
00:09:49.819 "copy": true,
00:09:49.819 "nvme_iov_md": false
00:09:49.819 },
00:09:49.819 "memory_domains": [
00:09:49.819 {
00:09:49.819 "dma_device_id": "system",
00:09:49.819 "dma_device_type": 1
00:09:49.819 },
00:09:49.819 {
00:09:49.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:49.819 "dma_device_type": 2
00:09:49.819 }
00:09:49.819 ],
00:09:49.819 "driver_specific": {}
00:09:49.819 }
00:09:49.819 ]'
00:09:49.819 09:09:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:09:49.819 09:09:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:09:49.819 09:09:33 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:09:49.819 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.819 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:49.819 [2024-10-15 09:09:33.527855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:09:49.819 [2024-10-15 09:09:33.528152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:49.819 [2024-10-15 09:09:33.528204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:09:49.819 [2024-10-15 09:09:33.528230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:49.819 [2024-10-15 09:09:33.531766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:49.819 [2024-10-15 09:09:33.531815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:09:49.819 Passthru0
00:09:49.819 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.819 09:09:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:09:49.819 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.819 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:49.819 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.819 09:09:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:09:49.819 {
00:09:49.819 "name": "Malloc0",
00:09:49.819 "aliases": [
00:09:49.819 "c6f425ff-8576-4a55-b686-ff4d59d40c24"
00:09:49.819 ],
00:09:49.819 "product_name": "Malloc disk",
00:09:49.819 "block_size": 512,
00:09:49.819 "num_blocks": 16384,
00:09:49.819 "uuid": "c6f425ff-8576-4a55-b686-ff4d59d40c24",
00:09:49.819 "assigned_rate_limits": {
00:09:49.819 "rw_ios_per_sec": 0,
00:09:49.819 "rw_mbytes_per_sec": 0,
00:09:49.819 "r_mbytes_per_sec": 0,
00:09:49.819 "w_mbytes_per_sec": 0
00:09:49.819 },
00:09:49.819 "claimed": true,
00:09:49.819 "claim_type": "exclusive_write",
00:09:49.819 "zoned": false,
00:09:49.819 "supported_io_types": {
00:09:49.819 "read": true,
00:09:49.819 "write": true,
00:09:49.819 "unmap": true,
00:09:49.819 "flush": true,
00:09:49.819 "reset": true,
00:09:49.819 "nvme_admin": false,
00:09:49.819 "nvme_io": false,
00:09:49.819 "nvme_io_md": false,
00:09:49.819 "write_zeroes": true,
00:09:49.819 "zcopy": true,
00:09:49.819 "get_zone_info": false,
00:09:49.819 "zone_management": false,
00:09:49.819 "zone_append": false,
00:09:49.819 "compare": false,
00:09:49.819 "compare_and_write": false,
00:09:49.819 "abort": true,
00:09:49.819 "seek_hole": false,
00:09:49.819 "seek_data": false,
00:09:49.819 "copy": true,
00:09:49.819 "nvme_iov_md": false
00:09:49.819 },
00:09:49.819 "memory_domains": [
00:09:49.819 {
00:09:49.819 "dma_device_id": "system",
00:09:49.819 "dma_device_type": 1
00:09:49.819 },
00:09:49.819 {
00:09:49.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:49.819 "dma_device_type": 2
00:09:49.819 }
00:09:49.819 ],
00:09:49.819 "driver_specific": {}
00:09:49.819 },
00:09:49.819 {
00:09:49.819 "name": "Passthru0",
00:09:49.819 "aliases": [
00:09:49.819 "79a42913-10e9-536c-8924-55a25587c90a"
00:09:49.819 ],
00:09:49.819 "product_name": "passthru",
00:09:49.819 "block_size": 512,
00:09:49.819 "num_blocks": 16384,
00:09:49.819 "uuid": "79a42913-10e9-536c-8924-55a25587c90a",
00:09:49.819 "assigned_rate_limits": {
00:09:49.819 "rw_ios_per_sec": 0,
00:09:49.819 "rw_mbytes_per_sec": 0,
00:09:49.819 "r_mbytes_per_sec": 0,
00:09:49.819 "w_mbytes_per_sec": 0
00:09:49.819 },
00:09:49.819 "claimed": false,
00:09:49.819 "zoned": false,
00:09:49.819 "supported_io_types": {
00:09:49.819 "read": true,
00:09:49.819 "write": true,
00:09:49.819 "unmap": true,
00:09:49.819 "flush": true,
00:09:49.819 "reset": true,
00:09:49.819 "nvme_admin": false,
00:09:49.819 "nvme_io": false,
00:09:49.819 "nvme_io_md": false,
00:09:49.819 "write_zeroes": true,
00:09:49.819 "zcopy": true,
00:09:49.819 "get_zone_info": false,
00:09:49.819 "zone_management": false,
00:09:49.819 "zone_append": false,
00:09:49.819 "compare": false,
00:09:49.819 "compare_and_write": false,
00:09:49.819 "abort": true,
00:09:49.819 "seek_hole": false,
00:09:49.819 "seek_data": false,
00:09:49.819 "copy": true,
00:09:49.819 "nvme_iov_md": false
00:09:49.819 },
00:09:49.819 "memory_domains": [
00:09:49.819 {
00:09:49.819 "dma_device_id": "system",
00:09:49.819 "dma_device_type": 1
00:09:49.819 },
00:09:49.819 {
00:09:49.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:49.819 "dma_device_type": 2
00:09:49.819 }
00:09:49.819 ],
00:09:49.819 "driver_specific": {
00:09:49.819 "passthru": {
00:09:49.819 "name": "Passthru0",
00:09:49.819 "base_bdev_name": "Malloc0"
00:09:49.819 }
00:09:49.819 }
00:09:49.819 }
00:09:49.819 ]'
00:09:49.819 09:09:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:09:49.819 09:09:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:09:49.819 09:09:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:09:49.819 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.819 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:49.819 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.819 09:09:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:09:49.819 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.819 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:49.819 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.819 09:09:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:09:49.819 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.819 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:49.819 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.819 09:09:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:09:49.819 09:09:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:09:49.819 ************************************
00:09:49.819 END TEST rpc_integrity
00:09:49.819 ************************************
00:09:49.819 09:09:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:09:49.819 
00:09:49.819 real	0m0.370s
00:09:49.819 user	0m0.218s
00:09:49.819 sys	0m0.050s
00:09:49.819 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:49.819 09:09:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:50.079 09:09:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:09:50.079 09:09:33 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:50.079 09:09:33 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:50.079 09:09:33 rpc -- common/autotest_common.sh@10 -- # set +x
00:09:50.079 ************************************
00:09:50.079 START TEST rpc_plugins
00:09:50.079 ************************************
00:09:50.079 09:09:33 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins
00:09:50.079 09:09:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:09:50.079 09:09:33 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:50.079 09:09:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:09:50.079 09:09:33 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:50.079 09:09:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:09:50.079 09:09:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:09:50.079 09:09:33 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:50.079 09:09:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:09:50.079 09:09:33 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:50.079 09:09:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:09:50.079 {
00:09:50.079 "name": "Malloc1",
00:09:50.079 "aliases": [
00:09:50.079 "460fe149-77fa-4b4d-95eb-75d814359a9d"
00:09:50.079 ],
00:09:50.079 "product_name": "Malloc disk",
00:09:50.079 "block_size": 4096,
00:09:50.079 "num_blocks": 256,
00:09:50.079 "uuid": "460fe149-77fa-4b4d-95eb-75d814359a9d",
00:09:50.079 "assigned_rate_limits": {
00:09:50.079 "rw_ios_per_sec": 0,
00:09:50.079 "rw_mbytes_per_sec": 0,
00:09:50.079 "r_mbytes_per_sec": 0,
00:09:50.079 "w_mbytes_per_sec": 0
00:09:50.079 },
00:09:50.079 "claimed": false,
00:09:50.079 "zoned": false,
00:09:50.079 "supported_io_types": {
00:09:50.079 "read": true,
00:09:50.079 "write": true,
00:09:50.079 "unmap": true,
00:09:50.079 "flush": true,
00:09:50.079 "reset": true,
00:09:50.079 "nvme_admin": false,
00:09:50.079 "nvme_io": false,
00:09:50.079 "nvme_io_md": false,
00:09:50.079 "write_zeroes": true,
00:09:50.079 "zcopy": true,
00:09:50.079 "get_zone_info": false,
00:09:50.079 "zone_management": false,
00:09:50.079 "zone_append": false,
00:09:50.079 "compare": false,
00:09:50.079 "compare_and_write": false,
00:09:50.079 "abort": true,
00:09:50.079 "seek_hole": false,
00:09:50.079 "seek_data": false,
00:09:50.079 "copy": true,
00:09:50.079 "nvme_iov_md": false
00:09:50.079 },
00:09:50.079 "memory_domains": [
00:09:50.079 {
00:09:50.079 "dma_device_id": "system",
00:09:50.079 "dma_device_type": 1
00:09:50.079 },
00:09:50.079 {
00:09:50.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:50.079 "dma_device_type": 2
00:09:50.079 }
00:09:50.079 ],
00:09:50.079 "driver_specific": {}
00:09:50.079 }
00:09:50.079 ]'
00:09:50.079 09:09:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:09:50.079 09:09:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:09:50.079 09:09:33 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:09:50.079 09:09:33 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:50.079 09:09:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:09:50.079 09:09:33 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:50.079 09:09:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:09:50.079 09:09:33 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:50.079 09:09:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:09:50.079 09:09:33 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:50.079 09:09:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:09:50.079 09:09:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:09:50.079 ************************************
00:09:50.079 END TEST rpc_plugins
00:09:50.079 ************************************
00:09:50.079 09:09:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:09:50.079 
00:09:50.079 real	0m0.172s
00:09:50.079 user	0m0.110s
00:09:50.079 sys	0m0.016s
00:09:50.079 09:09:33 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:50.079 09:09:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:09:50.338 09:09:34 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:09:50.338 09:09:34 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:50.338 09:09:34 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:50.338 09:09:34 rpc -- common/autotest_common.sh@10 -- # set +x
00:09:50.338 ************************************
00:09:50.338 START TEST rpc_trace_cmd_test
00:09:50.338 ************************************
00:09:50.338 09:09:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test
00:09:50.338 09:09:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:09:50.338 09:09:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:09:50.338 09:09:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:50.338 09:09:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:09:50.338 09:09:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:50.338 09:09:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:09:50.338 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56905",
00:09:50.338 "tpoint_group_mask": "0x8",
00:09:50.338 "iscsi_conn": {
00:09:50.338 "mask": "0x2",
00:09:50.338 "tpoint_mask": "0x0"
00:09:50.338 },
00:09:50.338 "scsi": {
00:09:50.338 "mask": "0x4",
00:09:50.338 "tpoint_mask": "0x0"
00:09:50.338 },
00:09:50.338 "bdev": {
00:09:50.338 "mask": "0x8",
00:09:50.338 "tpoint_mask": "0xffffffffffffffff"
00:09:50.338 },
00:09:50.338 "nvmf_rdma": {
00:09:50.338 "mask": "0x10",
00:09:50.338 "tpoint_mask": "0x0"
00:09:50.338 },
00:09:50.338 "nvmf_tcp": {
00:09:50.338 "mask": "0x20",
00:09:50.338 "tpoint_mask": "0x0"
00:09:50.338 },
00:09:50.338 "ftl": {
00:09:50.338 "mask": "0x40",
00:09:50.338 "tpoint_mask": "0x0"
00:09:50.338 },
00:09:50.338 "blobfs": {
00:09:50.338 "mask": "0x80",
00:09:50.339 "tpoint_mask": "0x0"
00:09:50.339 },
00:09:50.339 "dsa": {
00:09:50.339 "mask": "0x200",
00:09:50.339 "tpoint_mask": "0x0"
00:09:50.339 },
00:09:50.339 "thread": {
00:09:50.339 "mask": "0x400",
00:09:50.339 "tpoint_mask": "0x0"
00:09:50.339 },
00:09:50.339 "nvme_pcie": {
00:09:50.339 "mask": "0x800",
00:09:50.339 "tpoint_mask": "0x0"
00:09:50.339 },
00:09:50.339 "iaa": {
00:09:50.339 "mask": "0x1000",
00:09:50.339 "tpoint_mask": "0x0"
00:09:50.339 },
00:09:50.339 "nvme_tcp": {
00:09:50.339 "mask": "0x2000",
00:09:50.339 "tpoint_mask": "0x0"
00:09:50.339 },
00:09:50.339 "bdev_nvme": {
00:09:50.339 "mask": "0x4000",
00:09:50.339 "tpoint_mask": "0x0"
00:09:50.339 },
00:09:50.339 "sock": {
00:09:50.339 "mask": "0x8000",
00:09:50.339 "tpoint_mask": "0x0"
00:09:50.339 },
00:09:50.339 "blob": {
00:09:50.339 "mask": "0x10000",
00:09:50.339 "tpoint_mask": "0x0"
00:09:50.339 },
00:09:50.339 "bdev_raid": {
00:09:50.339 "mask": "0x20000",
00:09:50.339 "tpoint_mask": "0x0"
00:09:50.339 },
00:09:50.339 "scheduler": {
00:09:50.339 "mask": "0x40000",
00:09:50.339 "tpoint_mask": "0x0"
00:09:50.339 }
00:09:50.339 }'
00:09:50.339 09:09:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:09:50.339 09:09:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:09:50.339 09:09:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:09:50.339 09:09:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:09:50.339 09:09:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:09:50.339 09:09:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:09:50.339 09:09:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:09:50.597 09:09:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:09:50.597 09:09:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:09:50.597 ************************************
00:09:50.597 END TEST rpc_trace_cmd_test
00:09:50.597 ************************************
00:09:50.597 09:09:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:09:50.597 
00:09:50.597 real	0m0.320s
00:09:50.597 user	0m0.275s
00:09:50.597 sys	0m0.030s
00:09:50.597 09:09:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:50.597 09:09:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:09:50.597 09:09:34 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:09:50.597 09:09:34 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:09:50.597 09:09:34 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:09:50.597 09:09:34 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:50.597 09:09:34 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:50.597 09:09:34 rpc -- common/autotest_common.sh@10 -- # set +x
00:09:50.597 ************************************
00:09:50.597 START TEST rpc_daemon_integrity
00:09:50.597 ************************************
00:09:50.597 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:09:50.597 09:09:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:09:50.597 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:50.597 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:50.597 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:50.597 09:09:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:09:50.597 09:09:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:09:50.597 09:09:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:09:50.597 09:09:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:09:50.597 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:50.597 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:50.597 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:50.597 09:09:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:09:50.597 09:09:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:09:50.597 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:50.597 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:50.597 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:50.597 09:09:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:09:50.598 {
00:09:50.598 "name": "Malloc2",
00:09:50.598 "aliases": [
00:09:50.598 "c6c4a3e1-fd22-4699-a8d1-1618fb8902c7"
00:09:50.598 ],
00:09:50.598 "product_name": "Malloc disk",
00:09:50.598 "block_size": 512,
00:09:50.598 "num_blocks": 16384,
00:09:50.598 "uuid": "c6c4a3e1-fd22-4699-a8d1-1618fb8902c7",
00:09:50.598 "assigned_rate_limits": {
00:09:50.598 "rw_ios_per_sec": 0,
00:09:50.598 "rw_mbytes_per_sec": 0,
00:09:50.598 "r_mbytes_per_sec": 0,
00:09:50.598 "w_mbytes_per_sec": 0
00:09:50.598 },
00:09:50.598 "claimed": false,
00:09:50.598 "zoned": false,
00:09:50.598 "supported_io_types": {
00:09:50.598 "read": true,
00:09:50.598 "write": true,
00:09:50.598 "unmap": true,
00:09:50.598 "flush": true,
00:09:50.598 "reset": true,
00:09:50.598 "nvme_admin": false,
00:09:50.598 "nvme_io": false,
00:09:50.598 "nvme_io_md": false,
00:09:50.598 "write_zeroes": true,
00:09:50.598 "zcopy": true,
00:09:50.598 "get_zone_info": false,
00:09:50.598 "zone_management": false,
00:09:50.598 "zone_append": false,
00:09:50.598 "compare": false,
00:09:50.598 "compare_and_write": false,
00:09:50.598 "abort": true,
00:09:50.598 "seek_hole": false,
00:09:50.598 "seek_data": false,
00:09:50.598 "copy": true,
00:09:50.598 "nvme_iov_md": false
00:09:50.598 },
00:09:50.598 "memory_domains": [
00:09:50.598 {
00:09:50.598 "dma_device_id": "system",
00:09:50.598 "dma_device_type": 1
00:09:50.598 },
00:09:50.598 {
00:09:50.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:50.598 "dma_device_type": 2
00:09:50.598 }
00:09:50.598 ], 00:09:50.598 "driver_specific": {} 00:09:50.598 } 00:09:50.598 ]' 00:09:50.598 09:09:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:50.856 09:09:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:50.856 09:09:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:50.856 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.856 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:50.856 [2024-10-15 09:09:34.557272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:50.857 [2024-10-15 09:09:34.557385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.857 [2024-10-15 09:09:34.557425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:50.857 [2024-10-15 09:09:34.557445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.857 [2024-10-15 09:09:34.561104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.857 [2024-10-15 09:09:34.561184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:50.857 Passthru0 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:50.857 { 00:09:50.857 "name": "Malloc2", 00:09:50.857 "aliases": [ 00:09:50.857 "c6c4a3e1-fd22-4699-a8d1-1618fb8902c7" 
00:09:50.857 ], 00:09:50.857 "product_name": "Malloc disk", 00:09:50.857 "block_size": 512, 00:09:50.857 "num_blocks": 16384, 00:09:50.857 "uuid": "c6c4a3e1-fd22-4699-a8d1-1618fb8902c7", 00:09:50.857 "assigned_rate_limits": { 00:09:50.857 "rw_ios_per_sec": 0, 00:09:50.857 "rw_mbytes_per_sec": 0, 00:09:50.857 "r_mbytes_per_sec": 0, 00:09:50.857 "w_mbytes_per_sec": 0 00:09:50.857 }, 00:09:50.857 "claimed": true, 00:09:50.857 "claim_type": "exclusive_write", 00:09:50.857 "zoned": false, 00:09:50.857 "supported_io_types": { 00:09:50.857 "read": true, 00:09:50.857 "write": true, 00:09:50.857 "unmap": true, 00:09:50.857 "flush": true, 00:09:50.857 "reset": true, 00:09:50.857 "nvme_admin": false, 00:09:50.857 "nvme_io": false, 00:09:50.857 "nvme_io_md": false, 00:09:50.857 "write_zeroes": true, 00:09:50.857 "zcopy": true, 00:09:50.857 "get_zone_info": false, 00:09:50.857 "zone_management": false, 00:09:50.857 "zone_append": false, 00:09:50.857 "compare": false, 00:09:50.857 "compare_and_write": false, 00:09:50.857 "abort": true, 00:09:50.857 "seek_hole": false, 00:09:50.857 "seek_data": false, 00:09:50.857 "copy": true, 00:09:50.857 "nvme_iov_md": false 00:09:50.857 }, 00:09:50.857 "memory_domains": [ 00:09:50.857 { 00:09:50.857 "dma_device_id": "system", 00:09:50.857 "dma_device_type": 1 00:09:50.857 }, 00:09:50.857 { 00:09:50.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.857 "dma_device_type": 2 00:09:50.857 } 00:09:50.857 ], 00:09:50.857 "driver_specific": {} 00:09:50.857 }, 00:09:50.857 { 00:09:50.857 "name": "Passthru0", 00:09:50.857 "aliases": [ 00:09:50.857 "8e0925ad-9c99-5bff-8241-8388496d33e2" 00:09:50.857 ], 00:09:50.857 "product_name": "passthru", 00:09:50.857 "block_size": 512, 00:09:50.857 "num_blocks": 16384, 00:09:50.857 "uuid": "8e0925ad-9c99-5bff-8241-8388496d33e2", 00:09:50.857 "assigned_rate_limits": { 00:09:50.857 "rw_ios_per_sec": 0, 00:09:50.857 "rw_mbytes_per_sec": 0, 00:09:50.857 "r_mbytes_per_sec": 0, 00:09:50.857 "w_mbytes_per_sec": 0 
00:09:50.857 }, 00:09:50.857 "claimed": false, 00:09:50.857 "zoned": false, 00:09:50.857 "supported_io_types": { 00:09:50.857 "read": true, 00:09:50.857 "write": true, 00:09:50.857 "unmap": true, 00:09:50.857 "flush": true, 00:09:50.857 "reset": true, 00:09:50.857 "nvme_admin": false, 00:09:50.857 "nvme_io": false, 00:09:50.857 "nvme_io_md": false, 00:09:50.857 "write_zeroes": true, 00:09:50.857 "zcopy": true, 00:09:50.857 "get_zone_info": false, 00:09:50.857 "zone_management": false, 00:09:50.857 "zone_append": false, 00:09:50.857 "compare": false, 00:09:50.857 "compare_and_write": false, 00:09:50.857 "abort": true, 00:09:50.857 "seek_hole": false, 00:09:50.857 "seek_data": false, 00:09:50.857 "copy": true, 00:09:50.857 "nvme_iov_md": false 00:09:50.857 }, 00:09:50.857 "memory_domains": [ 00:09:50.857 { 00:09:50.857 "dma_device_id": "system", 00:09:50.857 "dma_device_type": 1 00:09:50.857 }, 00:09:50.857 { 00:09:50.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.857 "dma_device_type": 2 00:09:50.857 } 00:09:50.857 ], 00:09:50.857 "driver_specific": { 00:09:50.857 "passthru": { 00:09:50.857 "name": "Passthru0", 00:09:50.857 "base_bdev_name": "Malloc2" 00:09:50.857 } 00:09:50.857 } 00:09:50.857 } 00:09:50.857 ]' 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:50.857 ************************************ 00:09:50.857 END TEST rpc_daemon_integrity 00:09:50.857 ************************************ 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:50.857 00:09:50.857 real 0m0.343s 00:09:50.857 user 0m0.204s 00:09:50.857 sys 0m0.038s 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:50.857 09:09:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:51.122 09:09:34 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:51.122 09:09:34 rpc -- rpc/rpc.sh@84 -- # killprocess 56905 00:09:51.122 09:09:34 rpc -- common/autotest_common.sh@950 -- # '[' -z 56905 ']' 00:09:51.122 09:09:34 rpc -- common/autotest_common.sh@954 -- # kill -0 56905 00:09:51.122 09:09:34 rpc -- common/autotest_common.sh@955 -- # uname 00:09:51.122 09:09:34 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:51.122 09:09:34 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56905 00:09:51.122 killing process with pid 56905 00:09:51.122 09:09:34 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:51.122 09:09:34 rpc -- common/autotest_common.sh@960 -- 
# '[' reactor_0 = sudo ']' 00:09:51.122 09:09:34 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56905' 00:09:51.122 09:09:34 rpc -- common/autotest_common.sh@969 -- # kill 56905 00:09:51.122 09:09:34 rpc -- common/autotest_common.sh@974 -- # wait 56905 00:09:53.666 00:09:53.666 real 0m5.632s 00:09:53.666 user 0m6.305s 00:09:53.666 sys 0m1.023s 00:09:53.666 ************************************ 00:09:53.666 END TEST rpc 00:09:53.666 ************************************ 00:09:53.666 09:09:37 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.666 09:09:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.666 09:09:37 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:53.666 09:09:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:53.666 09:09:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.666 09:09:37 -- common/autotest_common.sh@10 -- # set +x 00:09:53.666 ************************************ 00:09:53.666 START TEST skip_rpc 00:09:53.666 ************************************ 00:09:53.666 09:09:37 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:53.666 * Looking for test storage... 
00:09:53.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:53.666 09:09:37 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:53.666 09:09:37 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:53.666 09:09:37 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:53.666 09:09:37 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@345 -- # : 1 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.667 09:09:37 skip_rpc -- scripts/common.sh@368 -- # return 0 00:09:53.667 09:09:37 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.667 09:09:37 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:53.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.667 --rc genhtml_branch_coverage=1 00:09:53.667 --rc genhtml_function_coverage=1 00:09:53.667 --rc genhtml_legend=1 00:09:53.667 --rc geninfo_all_blocks=1 00:09:53.667 --rc geninfo_unexecuted_blocks=1 00:09:53.667 00:09:53.667 ' 00:09:53.667 09:09:37 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:53.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.667 --rc genhtml_branch_coverage=1 00:09:53.667 --rc genhtml_function_coverage=1 00:09:53.667 --rc genhtml_legend=1 00:09:53.667 --rc geninfo_all_blocks=1 00:09:53.667 --rc geninfo_unexecuted_blocks=1 00:09:53.667 00:09:53.667 ' 00:09:53.667 09:09:37 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:09:53.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.667 --rc genhtml_branch_coverage=1 00:09:53.667 --rc genhtml_function_coverage=1 00:09:53.667 --rc genhtml_legend=1 00:09:53.667 --rc geninfo_all_blocks=1 00:09:53.667 --rc geninfo_unexecuted_blocks=1 00:09:53.667 00:09:53.667 ' 00:09:53.667 09:09:37 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:53.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.667 --rc genhtml_branch_coverage=1 00:09:53.667 --rc genhtml_function_coverage=1 00:09:53.667 --rc genhtml_legend=1 00:09:53.667 --rc geninfo_all_blocks=1 00:09:53.667 --rc geninfo_unexecuted_blocks=1 00:09:53.667 00:09:53.667 ' 00:09:53.667 09:09:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:53.667 09:09:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:53.667 09:09:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:53.667 09:09:37 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:53.667 09:09:37 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.667 09:09:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.667 ************************************ 00:09:53.667 START TEST skip_rpc 00:09:53.667 ************************************ 00:09:53.667 09:09:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:09:53.667 09:09:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57140 00:09:53.667 09:09:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:53.667 09:09:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:53.667 09:09:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:53.925 [2024-10-15 09:09:37.741059] Starting SPDK v25.01-pre 
git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:09:53.925 [2024-10-15 09:09:37.741680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57140 ] 00:09:54.187 [2024-10-15 09:09:37.933346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.187 [2024-10-15 09:09:38.110415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57140 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57140 ']' 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57140 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57140 00:09:59.458 killing process with pid 57140 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57140' 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57140 00:09:59.458 09:09:42 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57140 00:10:01.367 00:10:01.367 real 0m7.612s 00:10:01.367 user 0m6.910s 00:10:01.367 sys 0m0.586s 00:10:01.367 ************************************ 00:10:01.367 END TEST skip_rpc 00:10:01.367 ************************************ 00:10:01.367 09:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:01.367 09:09:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.367 09:09:45 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:01.367 09:09:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:01.367 09:09:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:01.367 09:09:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.367 
************************************ 00:10:01.367 START TEST skip_rpc_with_json 00:10:01.367 ************************************ 00:10:01.367 09:09:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:10:01.367 09:09:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:01.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.367 09:09:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57249 00:10:01.367 09:09:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:01.367 09:09:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57249 00:10:01.367 09:09:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:01.367 09:09:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57249 ']' 00:10:01.367 09:09:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.367 09:09:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:01.367 09:09:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.367 09:09:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:01.367 09:09:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:01.626 [2024-10-15 09:09:45.375333] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:10:01.626 [2024-10-15 09:09:45.376034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57249 ] 00:10:01.626 [2024-10-15 09:09:45.552100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.885 [2024-10-15 09:09:45.700018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.822 09:09:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.822 09:09:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:10:02.822 09:09:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:02.822 09:09:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.822 09:09:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:02.822 [2024-10-15 09:09:46.685723] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:02.822 request: 00:10:02.822 { 00:10:02.822 "trtype": "tcp", 00:10:02.822 "method": "nvmf_get_transports", 00:10:02.822 "req_id": 1 00:10:02.822 } 00:10:02.822 Got JSON-RPC error response 00:10:02.822 response: 00:10:02.822 { 00:10:02.822 "code": -19, 00:10:02.822 "message": "No such device" 00:10:02.822 } 00:10:02.822 09:09:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:02.822 09:09:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:02.822 09:09:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.822 09:09:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:02.822 [2024-10-15 09:09:46.697870] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:10:02.822 09:09:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.822 09:09:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:02.822 09:09:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.822 09:09:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:03.081 09:09:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.081 09:09:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:03.081 { 00:10:03.081 "subsystems": [ 00:10:03.081 { 00:10:03.081 "subsystem": "fsdev", 00:10:03.081 "config": [ 00:10:03.081 { 00:10:03.081 "method": "fsdev_set_opts", 00:10:03.081 "params": { 00:10:03.081 "fsdev_io_pool_size": 65535, 00:10:03.081 "fsdev_io_cache_size": 256 00:10:03.081 } 00:10:03.081 } 00:10:03.081 ] 00:10:03.081 }, 00:10:03.081 { 00:10:03.081 "subsystem": "keyring", 00:10:03.081 "config": [] 00:10:03.081 }, 00:10:03.081 { 00:10:03.081 "subsystem": "iobuf", 00:10:03.081 "config": [ 00:10:03.081 { 00:10:03.081 "method": "iobuf_set_options", 00:10:03.081 "params": { 00:10:03.081 "small_pool_count": 8192, 00:10:03.081 "large_pool_count": 1024, 00:10:03.081 "small_bufsize": 8192, 00:10:03.081 "large_bufsize": 135168 00:10:03.081 } 00:10:03.081 } 00:10:03.081 ] 00:10:03.081 }, 00:10:03.081 { 00:10:03.081 "subsystem": "sock", 00:10:03.081 "config": [ 00:10:03.081 { 00:10:03.081 "method": "sock_set_default_impl", 00:10:03.081 "params": { 00:10:03.081 "impl_name": "posix" 00:10:03.081 } 00:10:03.081 }, 00:10:03.081 { 00:10:03.081 "method": "sock_impl_set_options", 00:10:03.081 "params": { 00:10:03.081 "impl_name": "ssl", 00:10:03.081 "recv_buf_size": 4096, 00:10:03.081 "send_buf_size": 4096, 00:10:03.081 "enable_recv_pipe": true, 00:10:03.081 "enable_quickack": false, 00:10:03.081 "enable_placement_id": 0, 00:10:03.081 
"enable_zerocopy_send_server": true, 00:10:03.081 "enable_zerocopy_send_client": false, 00:10:03.081 "zerocopy_threshold": 0, 00:10:03.081 "tls_version": 0, 00:10:03.081 "enable_ktls": false 00:10:03.081 } 00:10:03.081 }, 00:10:03.081 { 00:10:03.081 "method": "sock_impl_set_options", 00:10:03.081 "params": { 00:10:03.081 "impl_name": "posix", 00:10:03.081 "recv_buf_size": 2097152, 00:10:03.081 "send_buf_size": 2097152, 00:10:03.081 "enable_recv_pipe": true, 00:10:03.081 "enable_quickack": false, 00:10:03.081 "enable_placement_id": 0, 00:10:03.081 "enable_zerocopy_send_server": true, 00:10:03.081 "enable_zerocopy_send_client": false, 00:10:03.081 "zerocopy_threshold": 0, 00:10:03.081 "tls_version": 0, 00:10:03.081 "enable_ktls": false 00:10:03.081 } 00:10:03.081 } 00:10:03.081 ] 00:10:03.081 }, 00:10:03.081 { 00:10:03.081 "subsystem": "vmd", 00:10:03.081 "config": [] 00:10:03.081 }, 00:10:03.081 { 00:10:03.081 "subsystem": "accel", 00:10:03.081 "config": [ 00:10:03.081 { 00:10:03.081 "method": "accel_set_options", 00:10:03.081 "params": { 00:10:03.081 "small_cache_size": 128, 00:10:03.081 "large_cache_size": 16, 00:10:03.081 "task_count": 2048, 00:10:03.081 "sequence_count": 2048, 00:10:03.081 "buf_count": 2048 00:10:03.081 } 00:10:03.081 } 00:10:03.081 ] 00:10:03.081 }, 00:10:03.081 { 00:10:03.081 "subsystem": "bdev", 00:10:03.081 "config": [ 00:10:03.081 { 00:10:03.081 "method": "bdev_set_options", 00:10:03.081 "params": { 00:10:03.081 "bdev_io_pool_size": 65535, 00:10:03.081 "bdev_io_cache_size": 256, 00:10:03.081 "bdev_auto_examine": true, 00:10:03.081 "iobuf_small_cache_size": 128, 00:10:03.081 "iobuf_large_cache_size": 16 00:10:03.081 } 00:10:03.081 }, 00:10:03.081 { 00:10:03.081 "method": "bdev_raid_set_options", 00:10:03.081 "params": { 00:10:03.081 "process_window_size_kb": 1024, 00:10:03.081 "process_max_bandwidth_mb_sec": 0 00:10:03.081 } 00:10:03.081 }, 00:10:03.081 { 00:10:03.081 "method": "bdev_iscsi_set_options", 00:10:03.081 "params": { 00:10:03.081 
"timeout_sec": 30 00:10:03.081 } 00:10:03.081 }, 00:10:03.081 { 00:10:03.081 "method": "bdev_nvme_set_options", 00:10:03.081 "params": { 00:10:03.081 "action_on_timeout": "none", 00:10:03.081 "timeout_us": 0, 00:10:03.081 "timeout_admin_us": 0, 00:10:03.081 "keep_alive_timeout_ms": 10000, 00:10:03.081 "arbitration_burst": 0, 00:10:03.081 "low_priority_weight": 0, 00:10:03.081 "medium_priority_weight": 0, 00:10:03.081 "high_priority_weight": 0, 00:10:03.081 "nvme_adminq_poll_period_us": 10000, 00:10:03.081 "nvme_ioq_poll_period_us": 0, 00:10:03.081 "io_queue_requests": 0, 00:10:03.081 "delay_cmd_submit": true, 00:10:03.081 "transport_retry_count": 4, 00:10:03.081 "bdev_retry_count": 3, 00:10:03.082 "transport_ack_timeout": 0, 00:10:03.082 "ctrlr_loss_timeout_sec": 0, 00:10:03.082 "reconnect_delay_sec": 0, 00:10:03.082 "fast_io_fail_timeout_sec": 0, 00:10:03.082 "disable_auto_failback": false, 00:10:03.082 "generate_uuids": false, 00:10:03.082 "transport_tos": 0, 00:10:03.082 "nvme_error_stat": false, 00:10:03.082 "rdma_srq_size": 0, 00:10:03.082 "io_path_stat": false, 00:10:03.082 "allow_accel_sequence": false, 00:10:03.082 "rdma_max_cq_size": 0, 00:10:03.082 "rdma_cm_event_timeout_ms": 0, 00:10:03.082 "dhchap_digests": [ 00:10:03.082 "sha256", 00:10:03.082 "sha384", 00:10:03.082 "sha512" 00:10:03.082 ], 00:10:03.082 "dhchap_dhgroups": [ 00:10:03.082 "null", 00:10:03.082 "ffdhe2048", 00:10:03.082 "ffdhe3072", 00:10:03.082 "ffdhe4096", 00:10:03.082 "ffdhe6144", 00:10:03.082 "ffdhe8192" 00:10:03.082 ] 00:10:03.082 } 00:10:03.082 }, 00:10:03.082 { 00:10:03.082 "method": "bdev_nvme_set_hotplug", 00:10:03.082 "params": { 00:10:03.082 "period_us": 100000, 00:10:03.082 "enable": false 00:10:03.082 } 00:10:03.082 }, 00:10:03.082 { 00:10:03.082 "method": "bdev_wait_for_examine" 00:10:03.082 } 00:10:03.082 ] 00:10:03.082 }, 00:10:03.082 { 00:10:03.082 "subsystem": "scsi", 00:10:03.082 "config": null 00:10:03.082 }, 00:10:03.082 { 00:10:03.082 "subsystem": "scheduler", 
00:10:03.082 "config": [ 00:10:03.082 { 00:10:03.082 "method": "framework_set_scheduler", 00:10:03.082 "params": { 00:10:03.082 "name": "static" 00:10:03.082 } 00:10:03.082 } 00:10:03.082 ] 00:10:03.082 }, 00:10:03.082 { 00:10:03.082 "subsystem": "vhost_scsi", 00:10:03.082 "config": [] 00:10:03.082 }, 00:10:03.082 { 00:10:03.082 "subsystem": "vhost_blk", 00:10:03.082 "config": [] 00:10:03.082 }, 00:10:03.082 { 00:10:03.082 "subsystem": "ublk", 00:10:03.082 "config": [] 00:10:03.082 }, 00:10:03.082 { 00:10:03.082 "subsystem": "nbd", 00:10:03.082 "config": [] 00:10:03.082 }, 00:10:03.082 { 00:10:03.082 "subsystem": "nvmf", 00:10:03.082 "config": [ 00:10:03.082 { 00:10:03.082 "method": "nvmf_set_config", 00:10:03.082 "params": { 00:10:03.082 "discovery_filter": "match_any", 00:10:03.082 "admin_cmd_passthru": { 00:10:03.082 "identify_ctrlr": false 00:10:03.082 }, 00:10:03.082 "dhchap_digests": [ 00:10:03.082 "sha256", 00:10:03.082 "sha384", 00:10:03.082 "sha512" 00:10:03.082 ], 00:10:03.082 "dhchap_dhgroups": [ 00:10:03.082 "null", 00:10:03.082 "ffdhe2048", 00:10:03.082 "ffdhe3072", 00:10:03.082 "ffdhe4096", 00:10:03.082 "ffdhe6144", 00:10:03.082 "ffdhe8192" 00:10:03.082 ] 00:10:03.082 } 00:10:03.082 }, 00:10:03.082 { 00:10:03.082 "method": "nvmf_set_max_subsystems", 00:10:03.082 "params": { 00:10:03.082 "max_subsystems": 1024 00:10:03.082 } 00:10:03.082 }, 00:10:03.082 { 00:10:03.082 "method": "nvmf_set_crdt", 00:10:03.082 "params": { 00:10:03.082 "crdt1": 0, 00:10:03.082 "crdt2": 0, 00:10:03.082 "crdt3": 0 00:10:03.082 } 00:10:03.082 }, 00:10:03.082 { 00:10:03.082 "method": "nvmf_create_transport", 00:10:03.082 "params": { 00:10:03.082 "trtype": "TCP", 00:10:03.082 "max_queue_depth": 128, 00:10:03.082 "max_io_qpairs_per_ctrlr": 127, 00:10:03.082 "in_capsule_data_size": 4096, 00:10:03.082 "max_io_size": 131072, 00:10:03.082 "io_unit_size": 131072, 00:10:03.082 "max_aq_depth": 128, 00:10:03.082 "num_shared_buffers": 511, 00:10:03.082 "buf_cache_size": 4294967295, 
00:10:03.082 "dif_insert_or_strip": false, 00:10:03.082 "zcopy": false, 00:10:03.082 "c2h_success": true, 00:10:03.082 "sock_priority": 0, 00:10:03.082 "abort_timeout_sec": 1, 00:10:03.082 "ack_timeout": 0, 00:10:03.082 "data_wr_pool_size": 0 00:10:03.082 } 00:10:03.082 } 00:10:03.082 ] 00:10:03.082 }, 00:10:03.082 { 00:10:03.082 "subsystem": "iscsi", 00:10:03.082 "config": [ 00:10:03.082 { 00:10:03.082 "method": "iscsi_set_options", 00:10:03.082 "params": { 00:10:03.082 "node_base": "iqn.2016-06.io.spdk", 00:10:03.082 "max_sessions": 128, 00:10:03.082 "max_connections_per_session": 2, 00:10:03.082 "max_queue_depth": 64, 00:10:03.082 "default_time2wait": 2, 00:10:03.082 "default_time2retain": 20, 00:10:03.082 "first_burst_length": 8192, 00:10:03.082 "immediate_data": true, 00:10:03.082 "allow_duplicated_isid": false, 00:10:03.082 "error_recovery_level": 0, 00:10:03.082 "nop_timeout": 60, 00:10:03.082 "nop_in_interval": 30, 00:10:03.082 "disable_chap": false, 00:10:03.082 "require_chap": false, 00:10:03.082 "mutual_chap": false, 00:10:03.082 "chap_group": 0, 00:10:03.082 "max_large_datain_per_connection": 64, 00:10:03.082 "max_r2t_per_connection": 4, 00:10:03.082 "pdu_pool_size": 36864, 00:10:03.082 "immediate_data_pool_size": 16384, 00:10:03.082 "data_out_pool_size": 2048 00:10:03.082 } 00:10:03.082 } 00:10:03.082 ] 00:10:03.082 } 00:10:03.082 ] 00:10:03.082 } 00:10:03.082 09:09:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:03.082 09:09:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57249 00:10:03.082 09:09:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57249 ']' 00:10:03.082 09:09:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57249 00:10:03.082 09:09:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:10:03.082 09:09:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:10:03.082 09:09:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57249 00:10:03.082 killing process with pid 57249 00:10:03.082 09:09:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:03.082 09:09:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:03.082 09:09:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57249' 00:10:03.082 09:09:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57249 00:10:03.082 09:09:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57249 00:10:05.615 09:09:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57305 00:10:05.615 09:09:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:05.615 09:09:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:10:10.885 09:09:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57305 00:10:10.885 09:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57305 ']' 00:10:10.885 09:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57305 00:10:10.885 09:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:10:10.885 09:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:10.885 09:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57305 00:10:10.885 killing process with pid 57305 00:10:10.885 09:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:10.885 09:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:10:10.885 09:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57305' 00:10:10.885 09:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57305 00:10:10.885 09:09:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57305 00:10:13.419 09:09:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:13.419 09:09:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:13.419 00:10:13.419 real 0m11.763s 00:10:13.419 user 0m11.005s 00:10:13.419 sys 0m1.235s 00:10:13.419 09:09:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.419 ************************************ 00:10:13.419 END TEST skip_rpc_with_json 00:10:13.419 ************************************ 00:10:13.419 09:09:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:13.419 09:09:57 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:13.419 09:09:57 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:13.419 09:09:57 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.419 09:09:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.419 ************************************ 00:10:13.419 START TEST skip_rpc_with_delay 00:10:13.419 ************************************ 00:10:13.419 09:09:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:10:13.419 09:09:57 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:13.419 09:09:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:10:13.419 09:09:57 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:13.419 09:09:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:13.419 09:09:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:13.419 09:09:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:13.419 09:09:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:13.419 09:09:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:13.419 09:09:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:13.420 09:09:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:13.420 09:09:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:13.420 09:09:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:13.420 [2024-10-15 09:09:57.198985] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:10:13.420 09:09:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:10:13.420 09:09:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:13.420 09:09:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:13.420 09:09:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:13.420 00:10:13.420 real 0m0.213s 00:10:13.420 user 0m0.118s 00:10:13.420 sys 0m0.093s 00:10:13.420 09:09:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.420 ************************************ 00:10:13.420 END TEST skip_rpc_with_delay 00:10:13.420 ************************************ 00:10:13.420 09:09:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:10:13.420 09:09:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:10:13.420 09:09:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:13.420 09:09:57 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:13.420 09:09:57 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:13.420 09:09:57 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.420 09:09:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.420 ************************************ 00:10:13.420 START TEST exit_on_failed_rpc_init 00:10:13.420 ************************************ 00:10:13.420 09:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:10:13.420 09:09:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57439 00:10:13.420 09:09:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:13.420 09:09:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57439 00:10:13.420 09:09:57 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57439 ']' 00:10:13.420 09:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.420 09:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.420 09:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.420 09:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.420 09:09:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:13.678 [2024-10-15 09:09:57.469731] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:10:13.679 [2024-10-15 09:09:57.469940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57439 ] 00:10:13.938 [2024-10-15 09:09:57.648577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.938 [2024-10-15 09:09:57.815944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.331 09:09:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:15.331 09:09:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:10:15.331 09:09:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:15.331 09:09:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:15.331 09:09:58 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:10:15.331 09:09:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:15.331 09:09:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:15.331 09:09:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:15.331 09:09:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:15.331 09:09:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:15.331 09:09:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:15.331 09:09:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:15.331 09:09:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:15.331 09:09:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:15.331 09:09:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:15.331 [2024-10-15 09:09:59.009545] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:10:15.331 [2024-10-15 09:09:59.009753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57468 ] 00:10:15.331 [2024-10-15 09:09:59.194312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.615 [2024-10-15 09:09:59.379050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.615 [2024-10-15 09:09:59.379276] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:10:15.615 [2024-10-15 09:09:59.379300] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:15.616 [2024-10-15 09:09:59.379317] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:15.874 09:09:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:10:15.874 09:09:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:15.874 09:09:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:10:15.874 09:09:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:10:15.874 09:09:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:10:15.874 09:09:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:15.874 09:09:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:15.874 09:09:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57439 00:10:15.874 09:09:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57439 ']' 00:10:15.874 09:09:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57439 00:10:15.874 09:09:59 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:10:15.874 09:09:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.874 09:09:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57439 00:10:15.874 09:09:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:15.874 09:09:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:15.874 killing process with pid 57439 00:10:15.874 09:09:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57439' 00:10:15.874 09:09:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57439 00:10:15.874 09:09:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57439 00:10:18.405 00:10:18.405 real 0m4.986s 00:10:18.405 user 0m5.426s 00:10:18.405 sys 0m0.880s 00:10:18.405 09:10:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.405 09:10:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:18.405 ************************************ 00:10:18.405 END TEST exit_on_failed_rpc_init 00:10:18.405 ************************************ 00:10:18.664 09:10:02 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:18.664 00:10:18.664 real 0m24.997s 00:10:18.664 user 0m23.649s 00:10:18.664 sys 0m3.021s 00:10:18.664 09:10:02 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.664 ************************************ 00:10:18.664 END TEST skip_rpc 00:10:18.664 09:10:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.664 ************************************ 00:10:18.664 09:10:02 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:18.664 09:10:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:18.665 09:10:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.665 09:10:02 -- common/autotest_common.sh@10 -- # set +x 00:10:18.665 ************************************ 00:10:18.665 START TEST rpc_client 00:10:18.665 ************************************ 00:10:18.665 09:10:02 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:18.665 * Looking for test storage... 00:10:18.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:10:18.665 09:10:02 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:18.665 09:10:02 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:10:18.665 09:10:02 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:18.924 09:10:02 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@345 
-- # : 1 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@353 -- # local d=1 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@355 -- # echo 1 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@353 -- # local d=2 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@355 -- # echo 2 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.924 09:10:02 rpc_client -- scripts/common.sh@368 -- # return 0 00:10:18.924 09:10:02 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.924 09:10:02 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:18.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.924 --rc genhtml_branch_coverage=1 00:10:18.924 --rc genhtml_function_coverage=1 00:10:18.924 --rc genhtml_legend=1 00:10:18.924 --rc geninfo_all_blocks=1 00:10:18.924 --rc geninfo_unexecuted_blocks=1 00:10:18.924 00:10:18.924 ' 00:10:18.924 09:10:02 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:18.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.924 --rc genhtml_branch_coverage=1 00:10:18.924 --rc genhtml_function_coverage=1 00:10:18.924 --rc 
genhtml_legend=1 00:10:18.924 --rc geninfo_all_blocks=1 00:10:18.924 --rc geninfo_unexecuted_blocks=1 00:10:18.924 00:10:18.924 ' 00:10:18.924 09:10:02 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:18.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.924 --rc genhtml_branch_coverage=1 00:10:18.924 --rc genhtml_function_coverage=1 00:10:18.924 --rc genhtml_legend=1 00:10:18.924 --rc geninfo_all_blocks=1 00:10:18.924 --rc geninfo_unexecuted_blocks=1 00:10:18.924 00:10:18.924 ' 00:10:18.924 09:10:02 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:18.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.924 --rc genhtml_branch_coverage=1 00:10:18.924 --rc genhtml_function_coverage=1 00:10:18.924 --rc genhtml_legend=1 00:10:18.924 --rc geninfo_all_blocks=1 00:10:18.924 --rc geninfo_unexecuted_blocks=1 00:10:18.924 00:10:18.924 ' 00:10:18.924 09:10:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:10:18.924 OK 00:10:18.924 09:10:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:18.924 00:10:18.924 real 0m0.267s 00:10:18.924 user 0m0.163s 00:10:18.925 sys 0m0.117s 00:10:18.925 09:10:02 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.925 ************************************ 00:10:18.925 END TEST rpc_client 00:10:18.925 ************************************ 00:10:18.925 09:10:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:18.925 09:10:02 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:18.925 09:10:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:18.925 09:10:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.925 09:10:02 -- common/autotest_common.sh@10 -- # set +x 00:10:18.925 ************************************ 00:10:18.925 START TEST json_config 
00:10:18.925 ************************************ 00:10:18.925 09:10:02 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:18.925 09:10:02 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:18.925 09:10:02 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:10:18.925 09:10:02 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:19.184 09:10:02 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:19.184 09:10:02 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.184 09:10:02 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.184 09:10:02 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.184 09:10:02 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.184 09:10:02 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.184 09:10:02 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.184 09:10:02 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.184 09:10:02 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.184 09:10:02 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.184 09:10:02 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.184 09:10:02 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.184 09:10:02 json_config -- scripts/common.sh@344 -- # case "$op" in 00:10:19.184 09:10:02 json_config -- scripts/common.sh@345 -- # : 1 00:10:19.184 09:10:02 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.184 09:10:02 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.184 09:10:02 json_config -- scripts/common.sh@365 -- # decimal 1 00:10:19.184 09:10:02 json_config -- scripts/common.sh@353 -- # local d=1 00:10:19.184 09:10:02 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.184 09:10:02 json_config -- scripts/common.sh@355 -- # echo 1 00:10:19.184 09:10:02 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.184 09:10:02 json_config -- scripts/common.sh@366 -- # decimal 2 00:10:19.184 09:10:02 json_config -- scripts/common.sh@353 -- # local d=2 00:10:19.184 09:10:02 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.184 09:10:02 json_config -- scripts/common.sh@355 -- # echo 2 00:10:19.184 09:10:02 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.184 09:10:02 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.184 09:10:02 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.184 09:10:02 json_config -- scripts/common.sh@368 -- # return 0 00:10:19.184 09:10:02 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.184 09:10:02 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:19.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.184 --rc genhtml_branch_coverage=1 00:10:19.184 --rc genhtml_function_coverage=1 00:10:19.184 --rc genhtml_legend=1 00:10:19.184 --rc geninfo_all_blocks=1 00:10:19.184 --rc geninfo_unexecuted_blocks=1 00:10:19.184 00:10:19.184 ' 00:10:19.184 09:10:02 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:19.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.184 --rc genhtml_branch_coverage=1 00:10:19.184 --rc genhtml_function_coverage=1 00:10:19.184 --rc genhtml_legend=1 00:10:19.184 --rc geninfo_all_blocks=1 00:10:19.184 --rc geninfo_unexecuted_blocks=1 00:10:19.184 00:10:19.184 ' 00:10:19.184 09:10:02 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:19.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.184 --rc genhtml_branch_coverage=1 00:10:19.184 --rc genhtml_function_coverage=1 00:10:19.184 --rc genhtml_legend=1 00:10:19.184 --rc geninfo_all_blocks=1 00:10:19.184 --rc geninfo_unexecuted_blocks=1 00:10:19.184 00:10:19.184 ' 00:10:19.184 09:10:02 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:19.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.184 --rc genhtml_branch_coverage=1 00:10:19.184 --rc genhtml_function_coverage=1 00:10:19.184 --rc genhtml_legend=1 00:10:19.184 --rc geninfo_all_blocks=1 00:10:19.184 --rc geninfo_unexecuted_blocks=1 00:10:19.185 00:10:19.185 ' 00:10:19.185 09:10:02 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fe6bb936-df87-4d06-be6f-50f757130ba3 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=fe6bb936-df87-4d06-be6f-50f757130ba3 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:19.185 09:10:02 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.185 09:10:02 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.185 09:10:02 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.185 09:10:02 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.185 09:10:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.185 09:10:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.185 09:10:02 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.185 09:10:02 json_config -- paths/export.sh@5 -- # export PATH 00:10:19.185 09:10:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@51 -- # : 0 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.185 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.185 09:10:02 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.185 09:10:02 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:10:19.185 09:10:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:19.185 09:10:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:19.185 09:10:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:19.185 09:10:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:19.185 WARNING: No tests are enabled so not running JSON configuration tests 00:10:19.185 09:10:02 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:10:19.185 09:10:02 json_config -- json_config/json_config.sh@28 -- # exit 0 00:10:19.185 00:10:19.185 real 0m0.211s 00:10:19.185 user 0m0.146s 00:10:19.185 sys 0m0.072s 00:10:19.185 09:10:02 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.185 ************************************ 00:10:19.185 END TEST json_config 00:10:19.185 ************************************ 00:10:19.185 09:10:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:19.185 09:10:02 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:19.185 09:10:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:19.185 09:10:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.185 09:10:02 -- common/autotest_common.sh@10 -- # set +x 00:10:19.185 ************************************ 00:10:19.185 START TEST json_config_extra_key 00:10:19.185 ************************************ 00:10:19.185 09:10:02 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:19.185 09:10:03 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:19.185 09:10:03 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:10:19.185 09:10:03 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:19.444 09:10:03 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:10:19.444 09:10:03 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.444 09:10:03 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:19.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.444 --rc genhtml_branch_coverage=1 00:10:19.444 --rc genhtml_function_coverage=1 00:10:19.444 --rc genhtml_legend=1 00:10:19.444 --rc geninfo_all_blocks=1 00:10:19.444 --rc geninfo_unexecuted_blocks=1 00:10:19.444 00:10:19.444 ' 00:10:19.444 09:10:03 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:19.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.444 --rc genhtml_branch_coverage=1 00:10:19.444 --rc genhtml_function_coverage=1 00:10:19.444 --rc 
genhtml_legend=1 00:10:19.444 --rc geninfo_all_blocks=1 00:10:19.444 --rc geninfo_unexecuted_blocks=1 00:10:19.444 00:10:19.444 ' 00:10:19.444 09:10:03 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:19.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.444 --rc genhtml_branch_coverage=1 00:10:19.444 --rc genhtml_function_coverage=1 00:10:19.444 --rc genhtml_legend=1 00:10:19.444 --rc geninfo_all_blocks=1 00:10:19.444 --rc geninfo_unexecuted_blocks=1 00:10:19.444 00:10:19.444 ' 00:10:19.444 09:10:03 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:19.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.444 --rc genhtml_branch_coverage=1 00:10:19.444 --rc genhtml_function_coverage=1 00:10:19.444 --rc genhtml_legend=1 00:10:19.444 --rc geninfo_all_blocks=1 00:10:19.444 --rc geninfo_unexecuted_blocks=1 00:10:19.444 00:10:19.444 ' 00:10:19.444 09:10:03 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:19.444 09:10:03 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:19.444 09:10:03 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.444 09:10:03 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.444 09:10:03 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.444 09:10:03 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.444 09:10:03 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.444 09:10:03 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.444 09:10:03 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.444 09:10:03 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.444 09:10:03 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.444 09:10:03 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.444 09:10:03 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fe6bb936-df87-4d06-be6f-50f757130ba3 00:10:19.444 09:10:03 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=fe6bb936-df87-4d06-be6f-50f757130ba3 00:10:19.444 09:10:03 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.444 09:10:03 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.444 09:10:03 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:19.444 09:10:03 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.444 09:10:03 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.444 09:10:03 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.444 09:10:03 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.445 09:10:03 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.445 09:10:03 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.445 09:10:03 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:19.445 09:10:03 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.445 09:10:03 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:10:19.445 09:10:03 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:19.445 09:10:03 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:19.445 09:10:03 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.445 09:10:03 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.445 09:10:03 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:10:19.445 09:10:03 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.445 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.445 09:10:03 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.445 09:10:03 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.445 09:10:03 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.445 09:10:03 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:19.445 09:10:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:19.445 09:10:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:19.445 09:10:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:19.445 09:10:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:19.445 09:10:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:19.445 09:10:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:19.445 09:10:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:19.445 09:10:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:19.445 09:10:03 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:19.445 INFO: launching applications... 00:10:19.445 09:10:03 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:10:19.445 09:10:03 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:19.445 09:10:03 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:19.445 09:10:03 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:19.445 09:10:03 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:19.445 09:10:03 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:19.445 09:10:03 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:19.445 09:10:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:19.445 09:10:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:19.445 09:10:03 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57678 00:10:19.445 09:10:03 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:19.445 Waiting for target to run... 00:10:19.445 09:10:03 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:19.445 09:10:03 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57678 /var/tmp/spdk_tgt.sock 00:10:19.445 09:10:03 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57678 ']' 00:10:19.445 09:10:03 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:19.445 09:10:03 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:19.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:10:19.445 09:10:03 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:19.445 09:10:03 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:19.445 09:10:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:19.445 [2024-10-15 09:10:03.322851] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:10:19.445 [2024-10-15 09:10:03.323705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57678 ] 00:10:20.012 [2024-10-15 09:10:03.929287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.271 [2024-10-15 09:10:04.084516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.247 09:10:04 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:21.247 09:10:04 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:10:21.247 00:10:21.247 09:10:04 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:21.247 INFO: shutting down applications... 00:10:21.247 09:10:04 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:10:21.247 09:10:04 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:21.247 09:10:04 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:21.247 09:10:04 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:21.247 09:10:04 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57678 ]] 00:10:21.247 09:10:04 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57678 00:10:21.247 09:10:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:21.247 09:10:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:21.247 09:10:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57678 00:10:21.247 09:10:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:21.530 09:10:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:21.530 09:10:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:21.530 09:10:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57678 00:10:21.530 09:10:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:22.098 09:10:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:22.098 09:10:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:22.098 09:10:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57678 00:10:22.098 09:10:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:22.665 09:10:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:22.665 09:10:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:22.665 09:10:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57678 00:10:22.665 09:10:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:23.232 09:10:06 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:10:23.232 09:10:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:23.232 09:10:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57678 00:10:23.233 09:10:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:23.491 09:10:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:23.491 09:10:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:23.491 09:10:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57678 00:10:23.491 09:10:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:24.058 09:10:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:24.058 09:10:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:24.058 09:10:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57678 00:10:24.058 09:10:07 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:24.058 09:10:07 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:24.058 09:10:07 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:24.058 SPDK target shutdown done 00:10:24.059 09:10:07 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:24.059 Success 00:10:24.059 09:10:07 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:24.059 00:10:24.059 real 0m4.877s 00:10:24.059 user 0m4.352s 00:10:24.059 sys 0m0.803s 00:10:24.059 09:10:07 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:24.059 09:10:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:24.059 ************************************ 00:10:24.059 END TEST json_config_extra_key 00:10:24.059 ************************************ 00:10:24.059 09:10:07 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:24.059 09:10:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:24.059 09:10:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:24.059 09:10:07 -- common/autotest_common.sh@10 -- # set +x 00:10:24.059 ************************************ 00:10:24.059 START TEST alias_rpc 00:10:24.059 ************************************ 00:10:24.059 09:10:07 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:24.325 * Looking for test storage... 00:10:24.326 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:24.326 09:10:08 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:24.326 09:10:08 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:24.326 09:10:08 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:24.326 09:10:08 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:24.326 09:10:08 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.326 09:10:08 alias_rpc -- scripts/common.sh@368 -- # return 0 00:10:24.326 09:10:08 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.326 09:10:08 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:24.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.326 --rc genhtml_branch_coverage=1 00:10:24.326 --rc genhtml_function_coverage=1 00:10:24.326 --rc genhtml_legend=1 00:10:24.326 --rc geninfo_all_blocks=1 00:10:24.326 --rc geninfo_unexecuted_blocks=1 00:10:24.326 00:10:24.326 ' 00:10:24.326 09:10:08 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:24.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.326 --rc genhtml_branch_coverage=1 00:10:24.326 --rc genhtml_function_coverage=1 00:10:24.326 --rc 
genhtml_legend=1 00:10:24.326 --rc geninfo_all_blocks=1 00:10:24.326 --rc geninfo_unexecuted_blocks=1 00:10:24.326 00:10:24.326 ' 00:10:24.326 09:10:08 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:24.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.326 --rc genhtml_branch_coverage=1 00:10:24.326 --rc genhtml_function_coverage=1 00:10:24.326 --rc genhtml_legend=1 00:10:24.326 --rc geninfo_all_blocks=1 00:10:24.326 --rc geninfo_unexecuted_blocks=1 00:10:24.326 00:10:24.326 ' 00:10:24.326 09:10:08 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:24.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.326 --rc genhtml_branch_coverage=1 00:10:24.326 --rc genhtml_function_coverage=1 00:10:24.326 --rc genhtml_legend=1 00:10:24.326 --rc geninfo_all_blocks=1 00:10:24.326 --rc geninfo_unexecuted_blocks=1 00:10:24.326 00:10:24.326 ' 00:10:24.326 09:10:08 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:24.326 09:10:08 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57795 00:10:24.326 09:10:08 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:24.326 09:10:08 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57795 00:10:24.326 09:10:08 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57795 ']' 00:10:24.326 09:10:08 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.326 09:10:08 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:24.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.326 09:10:08 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:24.326 09:10:08 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:24.326 09:10:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.326 [2024-10-15 09:10:08.225004] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:10:24.326 [2024-10-15 09:10:08.225217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57795 ] 00:10:24.585 [2024-10-15 09:10:08.397749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.844 [2024-10-15 09:10:08.544802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.780 09:10:09 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.780 09:10:09 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:25.780 09:10:09 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:26.039 09:10:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57795 00:10:26.039 09:10:09 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57795 ']' 00:10:26.039 09:10:09 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57795 00:10:26.039 09:10:09 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:10:26.039 09:10:09 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:26.039 09:10:09 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57795 00:10:26.039 09:10:09 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:26.039 09:10:09 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:26.039 killing process with pid 57795 00:10:26.039 09:10:09 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57795' 00:10:26.039 09:10:09 alias_rpc -- 
common/autotest_common.sh@969 -- # kill 57795 00:10:26.039 09:10:09 alias_rpc -- common/autotest_common.sh@974 -- # wait 57795 00:10:28.570 00:10:28.570 real 0m4.375s 00:10:28.570 user 0m4.445s 00:10:28.570 sys 0m0.764s 00:10:28.570 09:10:12 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:28.570 09:10:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.570 ************************************ 00:10:28.570 END TEST alias_rpc 00:10:28.570 ************************************ 00:10:28.570 09:10:12 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:10:28.570 09:10:12 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:28.570 09:10:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:28.570 09:10:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:28.570 09:10:12 -- common/autotest_common.sh@10 -- # set +x 00:10:28.570 ************************************ 00:10:28.570 START TEST spdkcli_tcp 00:10:28.570 ************************************ 00:10:28.570 09:10:12 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:28.570 * Looking for test storage... 
00:10:28.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:28.570 09:10:12 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:28.570 09:10:12 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:10:28.570 09:10:12 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:28.830 09:10:12 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.830 09:10:12 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:10:28.830 09:10:12 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.830 09:10:12 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:28.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.830 --rc genhtml_branch_coverage=1 00:10:28.830 --rc genhtml_function_coverage=1 00:10:28.830 --rc genhtml_legend=1 00:10:28.830 --rc geninfo_all_blocks=1 00:10:28.830 --rc geninfo_unexecuted_blocks=1 00:10:28.830 00:10:28.830 ' 00:10:28.830 09:10:12 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:28.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.830 --rc genhtml_branch_coverage=1 00:10:28.830 --rc genhtml_function_coverage=1 00:10:28.830 --rc genhtml_legend=1 00:10:28.830 --rc geninfo_all_blocks=1 00:10:28.830 --rc geninfo_unexecuted_blocks=1 00:10:28.830 00:10:28.830 ' 00:10:28.830 09:10:12 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:28.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.830 --rc genhtml_branch_coverage=1 00:10:28.830 --rc genhtml_function_coverage=1 00:10:28.830 --rc genhtml_legend=1 00:10:28.830 --rc geninfo_all_blocks=1 00:10:28.830 --rc geninfo_unexecuted_blocks=1 00:10:28.830 00:10:28.830 ' 00:10:28.830 09:10:12 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:28.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.830 --rc genhtml_branch_coverage=1 00:10:28.830 --rc genhtml_function_coverage=1 00:10:28.830 --rc genhtml_legend=1 00:10:28.830 --rc geninfo_all_blocks=1 00:10:28.830 --rc geninfo_unexecuted_blocks=1 00:10:28.830 00:10:28.830 ' 00:10:28.830 09:10:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:28.830 09:10:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:28.830 09:10:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:28.830 09:10:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:28.830 09:10:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:28.830 09:10:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:28.830 09:10:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:28.830 09:10:12 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:28.830 09:10:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:28.830 09:10:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57902 00:10:28.830 09:10:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:28.830 09:10:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57902 00:10:28.830 09:10:12 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 57902 ']' 00:10:28.830 09:10:12 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.830 09:10:12 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:28.830 09:10:12 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.830 09:10:12 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:28.830 09:10:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:28.830 [2024-10-15 09:10:12.707561] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:10:28.830 [2024-10-15 09:10:12.708039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57902 ] 00:10:29.089 [2024-10-15 09:10:12.885368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:29.347 [2024-10-15 09:10:13.043854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.347 [2024-10-15 09:10:13.043872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.281 09:10:13 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:30.281 09:10:13 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:10:30.281 09:10:13 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57919 00:10:30.281 09:10:13 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:30.281 09:10:13 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:30.540 [ 00:10:30.540 "bdev_malloc_delete", 
00:10:30.540 "bdev_malloc_create", 00:10:30.540 "bdev_null_resize", 00:10:30.540 "bdev_null_delete", 00:10:30.540 "bdev_null_create", 00:10:30.540 "bdev_nvme_cuse_unregister", 00:10:30.540 "bdev_nvme_cuse_register", 00:10:30.540 "bdev_opal_new_user", 00:10:30.540 "bdev_opal_set_lock_state", 00:10:30.540 "bdev_opal_delete", 00:10:30.540 "bdev_opal_get_info", 00:10:30.540 "bdev_opal_create", 00:10:30.540 "bdev_nvme_opal_revert", 00:10:30.540 "bdev_nvme_opal_init", 00:10:30.540 "bdev_nvme_send_cmd", 00:10:30.540 "bdev_nvme_set_keys", 00:10:30.540 "bdev_nvme_get_path_iostat", 00:10:30.540 "bdev_nvme_get_mdns_discovery_info", 00:10:30.540 "bdev_nvme_stop_mdns_discovery", 00:10:30.540 "bdev_nvme_start_mdns_discovery", 00:10:30.540 "bdev_nvme_set_multipath_policy", 00:10:30.540 "bdev_nvme_set_preferred_path", 00:10:30.540 "bdev_nvme_get_io_paths", 00:10:30.540 "bdev_nvme_remove_error_injection", 00:10:30.540 "bdev_nvme_add_error_injection", 00:10:30.540 "bdev_nvme_get_discovery_info", 00:10:30.540 "bdev_nvme_stop_discovery", 00:10:30.540 "bdev_nvme_start_discovery", 00:10:30.540 "bdev_nvme_get_controller_health_info", 00:10:30.540 "bdev_nvme_disable_controller", 00:10:30.540 "bdev_nvme_enable_controller", 00:10:30.540 "bdev_nvme_reset_controller", 00:10:30.540 "bdev_nvme_get_transport_statistics", 00:10:30.540 "bdev_nvme_apply_firmware", 00:10:30.540 "bdev_nvme_detach_controller", 00:10:30.540 "bdev_nvme_get_controllers", 00:10:30.540 "bdev_nvme_attach_controller", 00:10:30.540 "bdev_nvme_set_hotplug", 00:10:30.540 "bdev_nvme_set_options", 00:10:30.540 "bdev_passthru_delete", 00:10:30.540 "bdev_passthru_create", 00:10:30.540 "bdev_lvol_set_parent_bdev", 00:10:30.540 "bdev_lvol_set_parent", 00:10:30.540 "bdev_lvol_check_shallow_copy", 00:10:30.540 "bdev_lvol_start_shallow_copy", 00:10:30.540 "bdev_lvol_grow_lvstore", 00:10:30.540 "bdev_lvol_get_lvols", 00:10:30.540 "bdev_lvol_get_lvstores", 00:10:30.540 "bdev_lvol_delete", 00:10:30.540 "bdev_lvol_set_read_only", 
00:10:30.540 "bdev_lvol_resize", 00:10:30.540 "bdev_lvol_decouple_parent", 00:10:30.540 "bdev_lvol_inflate", 00:10:30.540 "bdev_lvol_rename", 00:10:30.540 "bdev_lvol_clone_bdev", 00:10:30.540 "bdev_lvol_clone", 00:10:30.540 "bdev_lvol_snapshot", 00:10:30.540 "bdev_lvol_create", 00:10:30.540 "bdev_lvol_delete_lvstore", 00:10:30.540 "bdev_lvol_rename_lvstore", 00:10:30.540 "bdev_lvol_create_lvstore", 00:10:30.540 "bdev_raid_set_options", 00:10:30.540 "bdev_raid_remove_base_bdev", 00:10:30.540 "bdev_raid_add_base_bdev", 00:10:30.540 "bdev_raid_delete", 00:10:30.540 "bdev_raid_create", 00:10:30.540 "bdev_raid_get_bdevs", 00:10:30.540 "bdev_error_inject_error", 00:10:30.540 "bdev_error_delete", 00:10:30.540 "bdev_error_create", 00:10:30.540 "bdev_split_delete", 00:10:30.540 "bdev_split_create", 00:10:30.540 "bdev_delay_delete", 00:10:30.540 "bdev_delay_create", 00:10:30.540 "bdev_delay_update_latency", 00:10:30.540 "bdev_zone_block_delete", 00:10:30.540 "bdev_zone_block_create", 00:10:30.540 "blobfs_create", 00:10:30.540 "blobfs_detect", 00:10:30.540 "blobfs_set_cache_size", 00:10:30.540 "bdev_aio_delete", 00:10:30.540 "bdev_aio_rescan", 00:10:30.540 "bdev_aio_create", 00:10:30.540 "bdev_ftl_set_property", 00:10:30.540 "bdev_ftl_get_properties", 00:10:30.540 "bdev_ftl_get_stats", 00:10:30.540 "bdev_ftl_unmap", 00:10:30.540 "bdev_ftl_unload", 00:10:30.540 "bdev_ftl_delete", 00:10:30.540 "bdev_ftl_load", 00:10:30.540 "bdev_ftl_create", 00:10:30.540 "bdev_virtio_attach_controller", 00:10:30.540 "bdev_virtio_scsi_get_devices", 00:10:30.540 "bdev_virtio_detach_controller", 00:10:30.540 "bdev_virtio_blk_set_hotplug", 00:10:30.540 "bdev_iscsi_delete", 00:10:30.540 "bdev_iscsi_create", 00:10:30.540 "bdev_iscsi_set_options", 00:10:30.540 "accel_error_inject_error", 00:10:30.540 "ioat_scan_accel_module", 00:10:30.540 "dsa_scan_accel_module", 00:10:30.540 "iaa_scan_accel_module", 00:10:30.540 "keyring_file_remove_key", 00:10:30.540 "keyring_file_add_key", 00:10:30.540 
"keyring_linux_set_options", 00:10:30.540 "fsdev_aio_delete", 00:10:30.540 "fsdev_aio_create", 00:10:30.540 "iscsi_get_histogram", 00:10:30.540 "iscsi_enable_histogram", 00:10:30.540 "iscsi_set_options", 00:10:30.540 "iscsi_get_auth_groups", 00:10:30.540 "iscsi_auth_group_remove_secret", 00:10:30.540 "iscsi_auth_group_add_secret", 00:10:30.540 "iscsi_delete_auth_group", 00:10:30.540 "iscsi_create_auth_group", 00:10:30.540 "iscsi_set_discovery_auth", 00:10:30.540 "iscsi_get_options", 00:10:30.540 "iscsi_target_node_request_logout", 00:10:30.540 "iscsi_target_node_set_redirect", 00:10:30.540 "iscsi_target_node_set_auth", 00:10:30.540 "iscsi_target_node_add_lun", 00:10:30.540 "iscsi_get_stats", 00:10:30.540 "iscsi_get_connections", 00:10:30.540 "iscsi_portal_group_set_auth", 00:10:30.540 "iscsi_start_portal_group", 00:10:30.540 "iscsi_delete_portal_group", 00:10:30.540 "iscsi_create_portal_group", 00:10:30.540 "iscsi_get_portal_groups", 00:10:30.540 "iscsi_delete_target_node", 00:10:30.540 "iscsi_target_node_remove_pg_ig_maps", 00:10:30.540 "iscsi_target_node_add_pg_ig_maps", 00:10:30.540 "iscsi_create_target_node", 00:10:30.540 "iscsi_get_target_nodes", 00:10:30.540 "iscsi_delete_initiator_group", 00:10:30.540 "iscsi_initiator_group_remove_initiators", 00:10:30.540 "iscsi_initiator_group_add_initiators", 00:10:30.540 "iscsi_create_initiator_group", 00:10:30.540 "iscsi_get_initiator_groups", 00:10:30.540 "nvmf_set_crdt", 00:10:30.540 "nvmf_set_config", 00:10:30.540 "nvmf_set_max_subsystems", 00:10:30.540 "nvmf_stop_mdns_prr", 00:10:30.540 "nvmf_publish_mdns_prr", 00:10:30.540 "nvmf_subsystem_get_listeners", 00:10:30.540 "nvmf_subsystem_get_qpairs", 00:10:30.540 "nvmf_subsystem_get_controllers", 00:10:30.540 "nvmf_get_stats", 00:10:30.540 "nvmf_get_transports", 00:10:30.540 "nvmf_create_transport", 00:10:30.540 "nvmf_get_targets", 00:10:30.540 "nvmf_delete_target", 00:10:30.540 "nvmf_create_target", 00:10:30.540 "nvmf_subsystem_allow_any_host", 00:10:30.540 
"nvmf_subsystem_set_keys", 00:10:30.540 "nvmf_subsystem_remove_host", 00:10:30.540 "nvmf_subsystem_add_host", 00:10:30.540 "nvmf_ns_remove_host", 00:10:30.540 "nvmf_ns_add_host", 00:10:30.540 "nvmf_subsystem_remove_ns", 00:10:30.540 "nvmf_subsystem_set_ns_ana_group", 00:10:30.540 "nvmf_subsystem_add_ns", 00:10:30.540 "nvmf_subsystem_listener_set_ana_state", 00:10:30.540 "nvmf_discovery_get_referrals", 00:10:30.540 "nvmf_discovery_remove_referral", 00:10:30.540 "nvmf_discovery_add_referral", 00:10:30.540 "nvmf_subsystem_remove_listener", 00:10:30.540 "nvmf_subsystem_add_listener", 00:10:30.540 "nvmf_delete_subsystem", 00:10:30.540 "nvmf_create_subsystem", 00:10:30.540 "nvmf_get_subsystems", 00:10:30.540 "env_dpdk_get_mem_stats", 00:10:30.540 "nbd_get_disks", 00:10:30.540 "nbd_stop_disk", 00:10:30.540 "nbd_start_disk", 00:10:30.540 "ublk_recover_disk", 00:10:30.540 "ublk_get_disks", 00:10:30.540 "ublk_stop_disk", 00:10:30.540 "ublk_start_disk", 00:10:30.540 "ublk_destroy_target", 00:10:30.540 "ublk_create_target", 00:10:30.540 "virtio_blk_create_transport", 00:10:30.540 "virtio_blk_get_transports", 00:10:30.540 "vhost_controller_set_coalescing", 00:10:30.540 "vhost_get_controllers", 00:10:30.540 "vhost_delete_controller", 00:10:30.540 "vhost_create_blk_controller", 00:10:30.540 "vhost_scsi_controller_remove_target", 00:10:30.540 "vhost_scsi_controller_add_target", 00:10:30.540 "vhost_start_scsi_controller", 00:10:30.540 "vhost_create_scsi_controller", 00:10:30.540 "thread_set_cpumask", 00:10:30.540 "scheduler_set_options", 00:10:30.540 "framework_get_governor", 00:10:30.540 "framework_get_scheduler", 00:10:30.540 "framework_set_scheduler", 00:10:30.540 "framework_get_reactors", 00:10:30.540 "thread_get_io_channels", 00:10:30.540 "thread_get_pollers", 00:10:30.540 "thread_get_stats", 00:10:30.540 "framework_monitor_context_switch", 00:10:30.540 "spdk_kill_instance", 00:10:30.540 "log_enable_timestamps", 00:10:30.540 "log_get_flags", 00:10:30.540 "log_clear_flag", 
00:10:30.540 "log_set_flag", 00:10:30.540 "log_get_level", 00:10:30.540 "log_set_level", 00:10:30.540 "log_get_print_level", 00:10:30.540 "log_set_print_level", 00:10:30.540 "framework_enable_cpumask_locks", 00:10:30.540 "framework_disable_cpumask_locks", 00:10:30.540 "framework_wait_init", 00:10:30.540 "framework_start_init", 00:10:30.540 "scsi_get_devices", 00:10:30.540 "bdev_get_histogram", 00:10:30.540 "bdev_enable_histogram", 00:10:30.540 "bdev_set_qos_limit", 00:10:30.540 "bdev_set_qd_sampling_period", 00:10:30.540 "bdev_get_bdevs", 00:10:30.540 "bdev_reset_iostat", 00:10:30.540 "bdev_get_iostat", 00:10:30.540 "bdev_examine", 00:10:30.540 "bdev_wait_for_examine", 00:10:30.540 "bdev_set_options", 00:10:30.540 "accel_get_stats", 00:10:30.540 "accel_set_options", 00:10:30.540 "accel_set_driver", 00:10:30.540 "accel_crypto_key_destroy", 00:10:30.540 "accel_crypto_keys_get", 00:10:30.540 "accel_crypto_key_create", 00:10:30.540 "accel_assign_opc", 00:10:30.540 "accel_get_module_info", 00:10:30.540 "accel_get_opc_assignments", 00:10:30.540 "vmd_rescan", 00:10:30.540 "vmd_remove_device", 00:10:30.540 "vmd_enable", 00:10:30.541 "sock_get_default_impl", 00:10:30.541 "sock_set_default_impl", 00:10:30.541 "sock_impl_set_options", 00:10:30.541 "sock_impl_get_options", 00:10:30.541 "iobuf_get_stats", 00:10:30.541 "iobuf_set_options", 00:10:30.541 "keyring_get_keys", 00:10:30.541 "framework_get_pci_devices", 00:10:30.541 "framework_get_config", 00:10:30.541 "framework_get_subsystems", 00:10:30.541 "fsdev_set_opts", 00:10:30.541 "fsdev_get_opts", 00:10:30.541 "trace_get_info", 00:10:30.541 "trace_get_tpoint_group_mask", 00:10:30.541 "trace_disable_tpoint_group", 00:10:30.541 "trace_enable_tpoint_group", 00:10:30.541 "trace_clear_tpoint_mask", 00:10:30.541 "trace_set_tpoint_mask", 00:10:30.541 "notify_get_notifications", 00:10:30.541 "notify_get_types", 00:10:30.541 "spdk_get_version", 00:10:30.541 "rpc_get_methods" 00:10:30.541 ] 00:10:30.541 09:10:14 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:30.541 09:10:14 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:30.541 09:10:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:30.541 09:10:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:30.541 09:10:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57902 00:10:30.541 09:10:14 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57902 ']' 00:10:30.541 09:10:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57902 00:10:30.541 09:10:14 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:10:30.541 09:10:14 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:30.541 09:10:14 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57902 00:10:30.541 09:10:14 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:30.541 killing process with pid 57902 00:10:30.541 09:10:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:30.541 09:10:14 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57902' 00:10:30.541 09:10:14 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57902 00:10:30.541 09:10:14 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57902 00:10:33.072 00:10:33.073 real 0m4.356s 00:10:33.073 user 0m7.797s 00:10:33.073 sys 0m0.733s 00:10:33.073 09:10:16 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.073 09:10:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:33.073 ************************************ 00:10:33.073 END TEST spdkcli_tcp 00:10:33.073 ************************************ 00:10:33.073 09:10:16 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:33.073 09:10:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:33.073 09:10:16 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.073 09:10:16 -- common/autotest_common.sh@10 -- # set +x 00:10:33.073 ************************************ 00:10:33.073 START TEST dpdk_mem_utility 00:10:33.073 ************************************ 00:10:33.073 09:10:16 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:33.073 * Looking for test storage... 00:10:33.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:33.073 09:10:16 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:33.073 09:10:16 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:10:33.073 09:10:16 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:33.073 09:10:16 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:10:33.073 
09:10:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.073 09:10:16 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:10:33.073 09:10:16 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.073 09:10:16 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:33.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.073 --rc genhtml_branch_coverage=1 00:10:33.073 --rc genhtml_function_coverage=1 00:10:33.073 --rc genhtml_legend=1 00:10:33.073 --rc geninfo_all_blocks=1 00:10:33.073 --rc geninfo_unexecuted_blocks=1 00:10:33.073 00:10:33.073 ' 00:10:33.073 09:10:16 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:33.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.073 --rc 
genhtml_branch_coverage=1 00:10:33.073 --rc genhtml_function_coverage=1 00:10:33.073 --rc genhtml_legend=1 00:10:33.073 --rc geninfo_all_blocks=1 00:10:33.073 --rc geninfo_unexecuted_blocks=1 00:10:33.073 00:10:33.073 ' 00:10:33.073 09:10:16 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:33.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.073 --rc genhtml_branch_coverage=1 00:10:33.073 --rc genhtml_function_coverage=1 00:10:33.073 --rc genhtml_legend=1 00:10:33.073 --rc geninfo_all_blocks=1 00:10:33.073 --rc geninfo_unexecuted_blocks=1 00:10:33.073 00:10:33.073 ' 00:10:33.073 09:10:16 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:33.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.073 --rc genhtml_branch_coverage=1 00:10:33.073 --rc genhtml_function_coverage=1 00:10:33.073 --rc genhtml_legend=1 00:10:33.073 --rc geninfo_all_blocks=1 00:10:33.073 --rc geninfo_unexecuted_blocks=1 00:10:33.073 00:10:33.073 ' 00:10:33.073 09:10:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:33.073 09:10:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58026 00:10:33.073 09:10:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58026 00:10:33.073 09:10:16 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58026 ']' 00:10:33.073 09:10:16 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.073 09:10:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:33.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:33.073 09:10:16 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:33.073 09:10:16 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.073 09:10:16 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:33.073 09:10:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:33.332 [2024-10-15 09:10:17.077943] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:10:33.332 [2024-10-15 09:10:17.078201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58026 ] 00:10:33.332 [2024-10-15 09:10:17.256784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.591 [2024-10-15 09:10:17.407666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.590 09:10:18 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:34.590 09:10:18 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:10:34.590 09:10:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:34.590 09:10:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:34.590 09:10:18 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.590 09:10:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:34.590 { 00:10:34.590 "filename": "/tmp/spdk_mem_dump.txt" 00:10:34.590 } 00:10:34.590 09:10:18 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.590 09:10:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # 
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:34.590 DPDK memory size 816.000000 MiB in 1 heap(s) 00:10:34.590 1 heaps totaling size 816.000000 MiB 00:10:34.590 size: 816.000000 MiB heap id: 0 00:10:34.590 end heaps---------- 00:10:34.590 9 mempools totaling size 595.772034 MiB 00:10:34.590 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:34.590 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:34.590 size: 92.545471 MiB name: bdev_io_58026 00:10:34.590 size: 50.003479 MiB name: msgpool_58026 00:10:34.590 size: 36.509338 MiB name: fsdev_io_58026 00:10:34.590 size: 21.763794 MiB name: PDU_Pool 00:10:34.590 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:34.590 size: 4.133484 MiB name: evtpool_58026 00:10:34.590 size: 0.026123 MiB name: Session_Pool 00:10:34.590 end mempools------- 00:10:34.590 6 memzones totaling size 4.142822 MiB 00:10:34.590 size: 1.000366 MiB name: RG_ring_0_58026 00:10:34.590 size: 1.000366 MiB name: RG_ring_1_58026 00:10:34.590 size: 1.000366 MiB name: RG_ring_4_58026 00:10:34.590 size: 1.000366 MiB name: RG_ring_5_58026 00:10:34.590 size: 0.125366 MiB name: RG_ring_2_58026 00:10:34.590 size: 0.015991 MiB name: RG_ring_3_58026 00:10:34.590 end memzones------- 00:10:34.590 09:10:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:34.851 heap id: 0 total size: 816.000000 MiB number of busy elements: 309 number of free elements: 18 00:10:34.851 list of free elements. 
size: 16.792847 MiB 00:10:34.851 element at address: 0x200006400000 with size: 1.995972 MiB 00:10:34.851 element at address: 0x20000a600000 with size: 1.995972 MiB 00:10:34.851 element at address: 0x200003e00000 with size: 1.991028 MiB 00:10:34.851 element at address: 0x200018d00040 with size: 0.999939 MiB 00:10:34.851 element at address: 0x200019100040 with size: 0.999939 MiB 00:10:34.851 element at address: 0x200019200000 with size: 0.999084 MiB 00:10:34.851 element at address: 0x200031e00000 with size: 0.994324 MiB 00:10:34.851 element at address: 0x200000400000 with size: 0.992004 MiB 00:10:34.851 element at address: 0x200018a00000 with size: 0.959656 MiB 00:10:34.851 element at address: 0x200019500040 with size: 0.936401 MiB 00:10:34.851 element at address: 0x200000200000 with size: 0.716980 MiB 00:10:34.851 element at address: 0x20001ac00000 with size: 0.563171 MiB 00:10:34.851 element at address: 0x200000c00000 with size: 0.490173 MiB 00:10:34.851 element at address: 0x200018e00000 with size: 0.487976 MiB 00:10:34.851 element at address: 0x200019600000 with size: 0.485413 MiB 00:10:34.851 element at address: 0x200012c00000 with size: 0.443481 MiB 00:10:34.851 element at address: 0x200028000000 with size: 0.390442 MiB 00:10:34.851 element at address: 0x200000800000 with size: 0.350891 MiB 00:10:34.851 list of standard malloc elements. 
size: 199.286255 MiB 00:10:34.851 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:10:34.851 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:10:34.851 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:10:34.851 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:10:34.851 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:10:34.851 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:10:34.851 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:10:34.851 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:10:34.851 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:10:34.851 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:10:34.851 element at address: 0x200012bff040 with size: 0.000305 MiB 00:10:34.851 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:10:34.851 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:10:34.851 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:10:34.851 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:10:34.851 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200000cff000 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:10:34.851 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200012bff180 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200012bff280 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200012bff380 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200012bff480 with size: 0.000244 MiB 00:10:34.851 element at address: 0x200012bff580 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200012bff680 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200012bff780 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200012bff880 with size: 0.000244 MiB 00:10:34.852 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200012c71880 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200012c71980 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200012c72080 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200012c72180 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:10:34.852 
element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:10:34.852 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:10:34.852 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac917c0 with size: 0.000244 
MiB 00:10:34.852 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac933c0 
with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:10:34.852 element at 
address: 0x20001ac94fc0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200028063f40 with size: 0.000244 MiB 00:10:34.852 element at address: 0x200028064040 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806af80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806b080 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806b180 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806b280 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806b380 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806b480 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806b580 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806b680 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806b780 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806b880 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806b980 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806be80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806c080 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806c180 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806c280 with size: 0.000244 MiB 
00:10:34.852 element at address: 0x20002806c380 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806c480 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806c580 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806c680 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806c780 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806c880 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806c980 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806d080 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806d180 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806d280 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806d380 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806d480 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806d580 with size: 0.000244 MiB 00:10:34.852 element at address: 0x20002806d680 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806d780 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806d880 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806d980 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806da80 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806db80 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806de80 with 
size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806df80 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806e080 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806e180 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806e280 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806e380 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806e480 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806e580 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806e680 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806e780 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806e880 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806e980 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806f080 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806f180 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806f280 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806f380 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806f480 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806f580 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806f680 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806f780 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806f880 with size: 0.000244 MiB 00:10:34.853 element at address: 0x20002806f980 with size: 0.000244 MiB 00:10:34.853 element at address: 
0x20002806fa80 with size: 0.000244 MiB
00:10:34.853 element at address: 0x20002806fb80 with size: 0.000244 MiB
00:10:34.853 element at address: 0x20002806fc80 with size: 0.000244 MiB
00:10:34.853 element at address: 0x20002806fd80 with size: 0.000244 MiB
00:10:34.853 element at address: 0x20002806fe80 with size: 0.000244 MiB
00:10:34.853 list of memzone associated elements. size: 599.920898 MiB
00:10:34.853 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:10:34.853 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:10:34.853 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:10:34.853 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:10:34.853 element at address: 0x200012df4740 with size: 92.045105 MiB
00:10:34.853 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58026_0
00:10:34.853 element at address: 0x200000dff340 with size: 48.003113 MiB
00:10:34.853 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58026_0
00:10:34.853 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:10:34.853 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58026_0
00:10:34.853 element at address: 0x2000197be900 with size: 20.255615 MiB
00:10:34.853 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:10:34.853 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:10:34.853 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:10:34.853 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:10:34.853 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58026_0
00:10:34.853 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:10:34.853 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58026
00:10:34.853 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:10:34.853 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58026
00:10:34.853 element at address: 0x200018efde00 with size: 1.008179 MiB
00:10:34.853 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:10:34.853 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:10:34.853 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:10:34.853 element at address: 0x200018afde00 with size: 1.008179 MiB
00:10:34.853 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:10:34.853 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:10:34.853 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:10:34.853 element at address: 0x200000cff100 with size: 1.000549 MiB
00:10:34.853 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58026
00:10:34.853 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:10:34.853 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58026
00:10:34.853 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:10:34.853 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58026
00:10:34.853 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:10:34.853 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58026
00:10:34.853 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:10:34.853 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58026
00:10:34.853 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:10:34.853 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58026
00:10:34.853 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:10:34.853 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:10:34.853 element at address: 0x200012c72280 with size: 0.500549 MiB
00:10:34.853 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:10:34.853 element at address: 0x20001967c440 with size: 0.250549 MiB
00:10:34.853 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:10:34.853 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:10:34.853 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58026
00:10:34.853 element at address: 0x20000085df80 with size: 0.125549 MiB
00:10:34.853 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58026
00:10:34.853 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:10:34.853 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:10:34.853 element at address: 0x200028064140 with size: 0.023804 MiB
00:10:34.853 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:10:34.853 element at address: 0x200000859d40 with size: 0.016174 MiB
00:10:34.853 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58026
00:10:34.853 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:10:34.853 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:10:34.853 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:10:34.853 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58026
00:10:34.853 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:10:34.853 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58026
00:10:34.853 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:10:34.853 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58026
00:10:34.853 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:10:34.853 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:10:34.853 09:10:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:10:34.853 09:10:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58026
00:10:34.853 09:10:18 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58026 ']'
00:10:34.853 09:10:18 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58026
00:10:34.853 09:10:18 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:10:34.853 09:10:18 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:34.853 09:10:18 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58026
killing process with pid 58026
09:10:18 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:34.853 09:10:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:34.853 09:10:18 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58026'
00:10:34.853 09:10:18 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58026
00:10:34.853 09:10:18 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58026
00:10:37.419
00:10:37.419 real 0m4.374s
00:10:37.419 user 0m4.257s
00:10:37.419 sys 0m0.756s
00:10:37.419 09:10:21 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:37.419 09:10:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:10:37.419 ************************************
00:10:37.419 END TEST dpdk_mem_utility
00:10:37.419 ************************************
00:10:37.419 09:10:21 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:10:37.419 09:10:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:10:37.419 09:10:21 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:37.419 09:10:21 -- common/autotest_common.sh@10 -- # set +x
00:10:37.419 ************************************
00:10:37.419 START TEST event
00:10:37.419 ************************************
00:10:37.419 09:10:21 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
* Looking for test storage...
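The `killprocess 58026` trace above follows a fixed pattern: check the pid is non-empty, probe it with `kill -0`, guard against killing a `sudo` wrapper via `uname`/`ps`, then `kill` and `wait` to reap it. A minimal standalone sketch of that pattern (the function name and details are illustrative, not SPDK's actual `common/autotest_common.sh` helper):

```shell
# Hypothetical re-creation of the killprocess pattern traced above.
killprocess_sketch() {
    local pid=$1
    [ -z "$pid" ] && return 1                   # '[' -z <pid> ']'
    kill -0 "$pid" 2>/dev/null || return 1      # kill -0: is the pid alive?
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1  # never kill a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap it; ignore SIGTERM status
}

sleep 30 &
killprocess_sketch $!
```

As in the log, the final `wait` reaps the killed pid so the test leaves no zombie behind.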
00:10:37.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:10:37.419 09:10:21 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:10:37.419 09:10:21 event -- common/autotest_common.sh@1691 -- # lcov --version
00:10:37.678 09:10:21 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:10:37.678 09:10:21 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:10:37.678 09:10:21 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:37.678 09:10:21 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:37.678 09:10:21 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:37.678 09:10:21 event -- scripts/common.sh@336 -- # IFS=.-:
00:10:37.678 09:10:21 event -- scripts/common.sh@336 -- # read -ra ver1
00:10:37.678 09:10:21 event -- scripts/common.sh@337 -- # IFS=.-:
00:10:37.678 09:10:21 event -- scripts/common.sh@337 -- # read -ra ver2
00:10:37.678 09:10:21 event -- scripts/common.sh@338 -- # local 'op=<'
00:10:37.678 09:10:21 event -- scripts/common.sh@340 -- # ver1_l=2
00:10:37.678 09:10:21 event -- scripts/common.sh@341 -- # ver2_l=1
00:10:37.678 09:10:21 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:37.678 09:10:21 event -- scripts/common.sh@344 -- # case "$op" in
00:10:37.678 09:10:21 event -- scripts/common.sh@345 -- # : 1
00:10:37.678 09:10:21 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:37.678 09:10:21 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:37.678 09:10:21 event -- scripts/common.sh@365 -- # decimal 1
00:10:37.678 09:10:21 event -- scripts/common.sh@353 -- # local d=1
00:10:37.678 09:10:21 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:37.678 09:10:21 event -- scripts/common.sh@355 -- # echo 1
00:10:37.678 09:10:21 event -- scripts/common.sh@365 -- # ver1[v]=1
00:10:37.678 09:10:21 event -- scripts/common.sh@366 -- # decimal 2
00:10:37.678 09:10:21 event -- scripts/common.sh@353 -- # local d=2
00:10:37.678 09:10:21 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:37.678 09:10:21 event -- scripts/common.sh@355 -- # echo 2
00:10:37.678 09:10:21 event -- scripts/common.sh@366 -- # ver2[v]=2
00:10:37.678 09:10:21 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:37.678 09:10:21 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:37.678 09:10:21 event -- scripts/common.sh@368 -- # return 0
00:10:37.678 09:10:21 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:37.678 09:10:21 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:10:37.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:37.678 --rc genhtml_branch_coverage=1
00:10:37.678 --rc genhtml_function_coverage=1
00:10:37.678 --rc genhtml_legend=1
00:10:37.678 --rc geninfo_all_blocks=1
00:10:37.678 --rc geninfo_unexecuted_blocks=1
00:10:37.678
00:10:37.678 '
00:10:37.678 09:10:21 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:10:37.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:37.678 --rc genhtml_branch_coverage=1
00:10:37.678 --rc genhtml_function_coverage=1
00:10:37.678 --rc genhtml_legend=1
00:10:37.678 --rc geninfo_all_blocks=1
00:10:37.678 --rc geninfo_unexecuted_blocks=1
00:10:37.678
00:10:37.678 '
00:10:37.678 09:10:21 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:10:37.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:37.678 --rc genhtml_branch_coverage=1
00:10:37.678 --rc genhtml_function_coverage=1
00:10:37.678 --rc genhtml_legend=1
00:10:37.678 --rc geninfo_all_blocks=1
00:10:37.678 --rc geninfo_unexecuted_blocks=1
00:10:37.678
00:10:37.678 '
00:10:37.678 09:10:21 event -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:10:37.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:37.678 --rc genhtml_branch_coverage=1
00:10:37.678 --rc genhtml_function_coverage=1
00:10:37.678 --rc genhtml_legend=1
00:10:37.678 --rc geninfo_all_blocks=1
00:10:37.678 --rc geninfo_unexecuted_blocks=1
00:10:37.678
00:10:37.678 '
00:10:37.678 09:10:21 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:10:37.678 09:10:21 event -- bdev/nbd_common.sh@6 -- # set -e
00:10:37.678 09:10:21 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:10:37.678 09:10:21 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:10:37.678 09:10:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:37.678 09:10:21 event -- common/autotest_common.sh@10 -- # set +x
00:10:37.678 ************************************
00:10:37.678 START TEST event_perf
00:10:37.678 ************************************
00:10:37.678 09:10:21 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
Running I/O for 1 seconds...[2024-10-15 09:10:21.438714] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization...
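The `lt 1.15 2` check traced above walks `scripts/common.sh`'s `cmp_versions`: split both version strings on `.`, `-`, and `:`, then compare component by component. A standalone re-implementation of that idea (the `cmp_lt` name is illustrative, not the actual `scripts/common.sh` function):

```shell
# Hypothetical sketch of the dotted-version comparison traced above:
# split on '.', '-', ':' and compare numeric components left to right.
cmp_lt() {
    local IFS=.-:                  # same separators as the trace (IFS=.-:)
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for ((i = 0; i < n; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}  # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                       # equal => not strictly less-than
}

cmp_lt 1.15 2 && echo "1.15 < 2"
```

For `1.15` vs `2` the first components already differ (1 < 2), which is why the trace above returns after a single loop iteration.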
00:10:37.678 [2024-10-15 09:10:21.439616] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58140 ]
00:10:37.937 [2024-10-15 09:10:21.624248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:37.937 [2024-10-15 09:10:21.840233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:37.937 [2024-10-15 09:10:21.840441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 1 seconds...[2024-10-15 09:10:21.840600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:37.937 [2024-10-15 09:10:21.840595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:39.314
00:10:39.314 lcore 0: 124470
00:10:39.314 lcore 1: 124472
00:10:39.314 lcore 2: 124475
00:10:39.314 lcore 3: 124468
00:10:39.314 done.
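The per-lcore counters printed above are events processed by each core during the 1-second event_perf run; summing them gives the aggregate throughput. A quick sketch (the counter values are pasted from this log; the `awk` one-liner itself is illustrative, not part of the test suite):

```shell
# Sum the per-lcore event counts from the event_perf run above.
total=$(printf '%s\n' \
    'lcore 0: 124470' \
    'lcore 1: 124472' \
    'lcore 2: 124475' \
    'lcore 3: 124468' |
    awk '{sum += $NF} END {print sum}')
echo "total events/sec: $total"   # 497885
```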
00:10:39.314
00:10:39.314 real 0m1.725s
00:10:39.314 user 0m4.440s
00:10:39.314 sys 0m0.151s
00:10:39.314 09:10:23 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:39.314 09:10:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:10:39.314 ************************************
00:10:39.314 END TEST event_perf
00:10:39.314 ************************************
00:10:39.314 09:10:23 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:10:39.314 09:10:23 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:10:39.314 09:10:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:39.314 09:10:23 event -- common/autotest_common.sh@10 -- # set +x
00:10:39.314 ************************************
00:10:39.314 START TEST event_reactor
00:10:39.314 ************************************
00:10:39.314 09:10:23 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:10:39.314 [2024-10-15 09:10:23.216027] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization...
00:10:39.314 [2024-10-15 09:10:23.216200] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58179 ] 00:10:39.573 [2024-10-15 09:10:23.390444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.832 [2024-10-15 09:10:23.566023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.209 test_start 00:10:41.209 oneshot 00:10:41.209 tick 100 00:10:41.209 tick 100 00:10:41.209 tick 250 00:10:41.209 tick 100 00:10:41.209 tick 100 00:10:41.209 tick 100 00:10:41.209 tick 250 00:10:41.209 tick 500 00:10:41.209 tick 100 00:10:41.209 tick 100 00:10:41.209 tick 250 00:10:41.209 tick 100 00:10:41.209 tick 100 00:10:41.209 test_end 00:10:41.209 00:10:41.209 real 0m1.653s 00:10:41.209 user 0m1.438s 00:10:41.209 sys 0m0.105s 00:10:41.209 09:10:24 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:41.209 ************************************ 00:10:41.209 END TEST event_reactor 00:10:41.209 ************************************ 00:10:41.209 09:10:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:41.209 09:10:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:41.209 09:10:24 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:41.209 09:10:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:41.209 09:10:24 event -- common/autotest_common.sh@10 -- # set +x 00:10:41.209 ************************************ 00:10:41.209 START TEST event_reactor_perf 00:10:41.209 ************************************ 00:10:41.209 09:10:24 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:41.209 [2024-10-15 
09:10:24.930666] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:10:41.209 [2024-10-15 09:10:24.931364] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58221 ] 00:10:41.209 [2024-10-15 09:10:25.107106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.468 [2024-10-15 09:10:25.258747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.845 test_start 00:10:42.845 test_end 00:10:42.845 Performance: 272374 events per second 00:10:42.845 00:10:42.845 real 0m1.632s 00:10:42.845 user 0m1.412s 00:10:42.845 sys 0m0.109s 00:10:42.845 09:10:26 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.845 ************************************ 00:10:42.845 END TEST event_reactor_perf 00:10:42.845 ************************************ 00:10:42.845 09:10:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:42.845 09:10:26 event -- event/event.sh@49 -- # uname -s 00:10:42.845 09:10:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:42.845 09:10:26 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:42.845 09:10:26 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:42.845 09:10:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.845 09:10:26 event -- common/autotest_common.sh@10 -- # set +x 00:10:42.845 ************************************ 00:10:42.845 START TEST event_scheduler 00:10:42.845 ************************************ 00:10:42.845 09:10:26 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:42.845 * Looking for test storage... 
00:10:42.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:42.845 09:10:26 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:42.845 09:10:26 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:10:42.845 09:10:26 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:42.845 09:10:26 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.845 09:10:26 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.104 09:10:26 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.104 09:10:26 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:10:43.104 09:10:26 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.104 09:10:26 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:43.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.104 --rc genhtml_branch_coverage=1 00:10:43.104 --rc genhtml_function_coverage=1 00:10:43.104 --rc genhtml_legend=1 00:10:43.104 --rc geninfo_all_blocks=1 00:10:43.104 --rc geninfo_unexecuted_blocks=1 00:10:43.104 00:10:43.104 ' 00:10:43.104 09:10:26 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:43.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.104 --rc genhtml_branch_coverage=1 00:10:43.104 --rc genhtml_function_coverage=1 00:10:43.104 --rc 
genhtml_legend=1 00:10:43.104 --rc geninfo_all_blocks=1 00:10:43.104 --rc geninfo_unexecuted_blocks=1 00:10:43.104 00:10:43.104 ' 00:10:43.104 09:10:26 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:43.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.104 --rc genhtml_branch_coverage=1 00:10:43.104 --rc genhtml_function_coverage=1 00:10:43.104 --rc genhtml_legend=1 00:10:43.104 --rc geninfo_all_blocks=1 00:10:43.104 --rc geninfo_unexecuted_blocks=1 00:10:43.104 00:10:43.104 ' 00:10:43.104 09:10:26 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:43.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.104 --rc genhtml_branch_coverage=1 00:10:43.104 --rc genhtml_function_coverage=1 00:10:43.104 --rc genhtml_legend=1 00:10:43.104 --rc geninfo_all_blocks=1 00:10:43.104 --rc geninfo_unexecuted_blocks=1 00:10:43.104 00:10:43.104 ' 00:10:43.104 09:10:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:43.104 09:10:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58297 00:10:43.105 09:10:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:43.105 09:10:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:43.105 09:10:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58297 00:10:43.105 09:10:26 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58297 ']' 00:10:43.105 09:10:26 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.105 09:10:26 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:43.105 09:10:26 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:10:43.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.105 09:10:26 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:43.105 09:10:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:43.105 [2024-10-15 09:10:26.886262] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:10:43.105 [2024-10-15 09:10:26.887503] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58297 ] 00:10:43.363 [2024-10-15 09:10:27.075756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:43.363 [2024-10-15 09:10:27.236800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.363 [2024-10-15 09:10:27.236958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.363 [2024-10-15 09:10:27.237040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.363 [2024-10-15 09:10:27.237050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.325 09:10:27 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:44.325 09:10:27 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:10:44.325 09:10:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:44.325 09:10:27 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.325 09:10:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:44.325 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:44.325 POWER: Cannot set governor of lcore 0 to userspace 00:10:44.325 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:44.325 POWER: Cannot set governor of lcore 0 to performance 00:10:44.325 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:44.325 POWER: Cannot set governor of lcore 0 to userspace 00:10:44.325 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:44.325 POWER: Cannot set governor of lcore 0 to userspace 00:10:44.325 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:10:44.325 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:44.325 POWER: Unable to set Power Management Environment for lcore 0 00:10:44.325 [2024-10-15 09:10:27.888068] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:10:44.325 [2024-10-15 09:10:27.888104] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:10:44.325 [2024-10-15 09:10:27.888143] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:10:44.325 [2024-10-15 09:10:27.888175] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:44.325 [2024-10-15 09:10:27.888188] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:44.325 [2024-10-15 09:10:27.888203] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:44.325 09:10:27 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.325 09:10:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:44.326 09:10:27 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.326 09:10:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:44.326 [2024-10-15 09:10:28.214999] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:10:44.326 09:10:28 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.326 09:10:28 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:44.326 09:10:28 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:44.326 09:10:28 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:44.326 09:10:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:44.326 ************************************ 00:10:44.326 START TEST scheduler_create_thread 00:10:44.326 ************************************ 00:10:44.326 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:10:44.326 09:10:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:44.326 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.326 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:44.326 2 00:10:44.326 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.326 09:10:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:44.326 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.326 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:44.585 3 00:10:44.585 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.585 09:10:28 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:44.585 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.585 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:44.585 4 00:10:44.585 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.585 09:10:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:44.585 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.585 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:44.585 5 00:10:44.585 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.585 09:10:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:44.585 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.585 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:44.585 6 00:10:44.585 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.585 09:10:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:44.585 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.585 09:10:28 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:10:44.585 7 00:10:44.585 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.585 09:10:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:44.585 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.585 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:44.585 8 00:10:44.585 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.586 09:10:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:44.586 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.586 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:44.586 9 00:10:44.586 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.586 09:10:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:44.586 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.586 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:44.586 10 00:10:44.586 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.586 09:10:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:10:44.586 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.586 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:44.586 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.586 09:10:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:44.586 09:10:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:44.586 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.586 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:44.586 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.586 09:10:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:44.586 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.586 09:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:45.964 09:10:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.964 09:10:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:45.964 09:10:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:45.964 09:10:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.964 09:10:29 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:47.343 ************************************ 00:10:47.343 END TEST scheduler_create_thread 00:10:47.343 ************************************ 00:10:47.343 09:10:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.343 00:10:47.343 real 0m2.621s 00:10:47.343 user 0m0.019s 00:10:47.343 sys 0m0.008s 00:10:47.343 09:10:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:47.343 09:10:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:47.343 09:10:30 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:47.343 09:10:30 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58297 00:10:47.343 09:10:30 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58297 ']' 00:10:47.343 09:10:30 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58297 00:10:47.343 09:10:30 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:10:47.343 09:10:30 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:47.343 09:10:30 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58297 00:10:47.343 killing process with pid 58297 00:10:47.343 09:10:30 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:10:47.343 09:10:30 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:10:47.343 09:10:30 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58297' 00:10:47.343 09:10:30 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58297 00:10:47.343 09:10:30 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58297 00:10:47.616 [2024-10-15 09:10:31.329543] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:10:48.552 ************************************ 00:10:48.552 END TEST event_scheduler 00:10:48.552 ************************************ 00:10:48.552 00:10:48.552 real 0m5.844s 00:10:48.552 user 0m10.214s 00:10:48.552 sys 0m0.550s 00:10:48.552 09:10:32 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.552 09:10:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:48.552 09:10:32 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:48.552 09:10:32 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:48.552 09:10:32 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:48.552 09:10:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.552 09:10:32 event -- common/autotest_common.sh@10 -- # set +x 00:10:48.552 ************************************ 00:10:48.552 START TEST app_repeat 00:10:48.552 ************************************ 00:10:48.552 09:10:32 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:10:48.552 09:10:32 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:48.552 09:10:32 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:48.811 09:10:32 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:48.811 09:10:32 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:48.811 09:10:32 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:48.811 09:10:32 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:48.811 09:10:32 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:48.811 09:10:32 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58405 00:10:48.811 09:10:32 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:48.811 09:10:32 event.app_repeat -- 
event/event.sh@21 -- # echo 'Process app_repeat pid: 58405' 00:10:48.811 Process app_repeat pid: 58405 00:10:48.811 09:10:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:48.811 09:10:32 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:48.811 spdk_app_start Round 0 00:10:48.811 09:10:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:48.811 09:10:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58405 /var/tmp/spdk-nbd.sock 00:10:48.811 09:10:32 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58405 ']' 00:10:48.811 09:10:32 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:48.811 09:10:32 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:48.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:48.811 09:10:32 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:48.811 09:10:32 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:48.811 09:10:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:48.811 [2024-10-15 09:10:32.549342] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:10:48.811 [2024-10-15 09:10:32.549518] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58405 ] 00:10:48.811 [2024-10-15 09:10:32.730760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:49.070 [2024-10-15 09:10:32.902609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.070 [2024-10-15 09:10:32.902619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.637 09:10:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:49.637 09:10:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:10:49.637 09:10:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:50.205 Malloc0 00:10:50.205 09:10:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:50.463 Malloc1 00:10:50.463 09:10:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:50.463 09:10:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:50.463 09:10:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:50.463 09:10:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:50.463 09:10:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:50.463 09:10:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:50.463 09:10:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:50.464 09:10:34 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:50.464 09:10:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:50.464 09:10:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:50.464 09:10:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:50.464 09:10:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:50.464 09:10:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:50.464 09:10:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:50.464 09:10:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:50.464 09:10:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:50.723 /dev/nbd0 00:10:50.723 09:10:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:50.723 09:10:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:50.723 09:10:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:50.723 09:10:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:50.723 09:10:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:50.723 09:10:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:50.723 09:10:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:50.723 09:10:34 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:10:50.723 09:10:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:50.723 09:10:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:50.723 09:10:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:50.723 1+0 records in 00:10:50.723 1+0 
records out 00:10:50.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334724 s, 12.2 MB/s 00:10:50.723 09:10:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:50.723 09:10:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:50.723 09:10:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:50.723 09:10:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:50.723 09:10:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:50.723 09:10:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:50.723 09:10:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:50.723 09:10:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:50.982 /dev/nbd1 00:10:51.241 09:10:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:51.241 09:10:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:51.241 09:10:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:51.241 09:10:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:51.241 09:10:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:51.241 09:10:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:51.241 09:10:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:51.241 09:10:34 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:10:51.241 09:10:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:51.241 09:10:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:51.241 09:10:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:51.241 1+0 records in 00:10:51.241 1+0 records out 00:10:51.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376648 s, 10.9 MB/s 00:10:51.241 09:10:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:51.241 09:10:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:51.241 09:10:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:51.241 09:10:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:51.241 09:10:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:51.241 09:10:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:51.241 09:10:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:51.241 09:10:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:51.241 09:10:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:51.241 09:10:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:51.500 { 00:10:51.500 "nbd_device": "/dev/nbd0", 00:10:51.500 "bdev_name": "Malloc0" 00:10:51.500 }, 00:10:51.500 { 00:10:51.500 "nbd_device": "/dev/nbd1", 00:10:51.500 "bdev_name": "Malloc1" 00:10:51.500 } 00:10:51.500 ]' 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:51.500 { 00:10:51.500 "nbd_device": "/dev/nbd0", 00:10:51.500 "bdev_name": "Malloc0" 00:10:51.500 }, 00:10:51.500 { 00:10:51.500 "nbd_device": "/dev/nbd1", 00:10:51.500 "bdev_name": "Malloc1" 00:10:51.500 } 00:10:51.500 ]' 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:51.500 /dev/nbd1' 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:51.500 /dev/nbd1' 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:51.500 256+0 records in 00:10:51.500 256+0 records out 00:10:51.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0067666 s, 155 MB/s 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:51.500 256+0 records in 00:10:51.500 256+0 records out 00:10:51.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297527 s, 35.2 MB/s 00:10:51.500 09:10:35 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:51.500 256+0 records in 00:10:51.500 256+0 records out 00:10:51.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0339225 s, 30.9 MB/s 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:51.500 09:10:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:51.501 09:10:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:51.501 09:10:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:51.501 09:10:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:51.501 09:10:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:51.501 09:10:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:51.501 09:10:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:51.501 09:10:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:51.760 09:10:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:52.019 09:10:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:52.019 09:10:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:52.019 09:10:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:52.019 09:10:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:52.019 09:10:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:52.019 09:10:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:52.019 09:10:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:52.019 09:10:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:52.019 09:10:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:52.278 09:10:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:52.278 09:10:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:52.279 09:10:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:52.279 09:10:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:52.279 09:10:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:52.279 09:10:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:52.279 09:10:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:10:52.279 09:10:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:52.279 09:10:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:52.279 09:10:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:52.279 09:10:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:52.537 09:10:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:52.537 09:10:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:52.537 09:10:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:52.537 09:10:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:52.537 09:10:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:52.537 09:10:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:52.537 09:10:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:52.537 09:10:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:52.537 09:10:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:52.537 09:10:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:52.537 09:10:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:52.537 09:10:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:52.537 09:10:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:53.105 09:10:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:54.482 [2024-10-15 09:10:38.074540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:54.482 [2024-10-15 09:10:38.218581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.482 [2024-10-15 09:10:38.218590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.741 
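The round above runs SPDK's `nbd_dd_data_verify` helper: fill a 1 MiB temp file from `/dev/urandom`, `dd` it onto each NBD device, then `cmp` the first 1 MiB back. A minimal runnable sketch of that write/verify pattern follows; plain temp files stand in for `/dev/nbd0`/`/dev/nbd1` (an assumption, so the sketch runs without an NBD-backed SPDK target), and the log's `oflag=direct` is dropped since direct I/O is not supported on all temp filesystems.

```shell
#!/usr/bin/env bash
# Sketch of the nbd_dd_data_verify write/verify flow seen in the log.
# ASSUMPTION: temp files stand in for the real /dev/nbd0 and /dev/nbd1.
tmp_file=$(mktemp)
nbd0=$(mktemp)   # stand-in for /dev/nbd0
nbd1=$(mktemp)   # stand-in for /dev/nbd1

# Write phase: 256 x 4096-byte blocks = 1 MiB of random data,
# copied onto every "device" in the list.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "$nbd0" "$nbd1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# Verify phase: byte-compare the first 1 MiB of each device
# against the pattern file, as `cmp -b -n 1M` does in the log.
rc=0
for dev in "$nbd0" "$nbd1"; do
    cmp -b -n 1M "$tmp_file" "$dev" || rc=1
done
echo "verify rc=$rc"
```

The `-n 1M` size suffix and `-b` (print differing bytes) match GNU `cmp` as invoked by the test script above.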
[2024-10-15 09:10:38.435333] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:54.741 [2024-10-15 09:10:38.435431] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:56.128 spdk_app_start Round 1 00:10:56.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:56.128 09:10:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:56.128 09:10:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:56.128 09:10:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58405 /var/tmp/spdk-nbd.sock 00:10:56.128 09:10:39 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58405 ']' 00:10:56.128 09:10:39 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:56.128 09:10:39 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:56.128 09:10:39 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:10:56.128 09:10:39 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:56.128 09:10:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:56.386 09:10:40 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:56.386 09:10:40 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:10:56.386 09:10:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:56.953 Malloc0 00:10:56.953 09:10:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:57.212 Malloc1 00:10:57.212 09:10:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:57.212 09:10:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:57.212 09:10:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:57.212 09:10:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:57.212 09:10:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:57.212 09:10:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:57.212 09:10:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:57.212 09:10:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:57.212 09:10:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:57.212 09:10:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:57.212 09:10:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:57.212 09:10:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:57.212 09:10:40 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:57.212 09:10:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:57.212 09:10:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:57.212 09:10:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:57.471 /dev/nbd0 00:10:57.471 09:10:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:57.471 09:10:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:57.471 09:10:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:57.471 09:10:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:57.471 09:10:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:57.471 09:10:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:57.471 09:10:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:57.471 09:10:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:10:57.471 09:10:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:57.471 09:10:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:57.471 09:10:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:57.471 1+0 records in 00:10:57.471 1+0 records out 00:10:57.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395273 s, 10.4 MB/s 00:10:57.471 09:10:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:57.471 09:10:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:57.471 09:10:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:57.471 
09:10:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:57.471 09:10:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:57.471 09:10:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:57.471 09:10:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:57.471 09:10:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:57.730 /dev/nbd1 00:10:57.730 09:10:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:57.730 09:10:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:57.730 09:10:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:57.730 09:10:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:57.730 09:10:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:57.730 09:10:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:57.730 09:10:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:57.730 09:10:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:10:57.730 09:10:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:57.730 09:10:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:57.730 09:10:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:57.730 1+0 records in 00:10:57.730 1+0 records out 00:10:57.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411597 s, 10.0 MB/s 00:10:57.730 09:10:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:57.730 09:10:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:57.730 09:10:41 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:57.730 09:10:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:57.730 09:10:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:57.730 09:10:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:57.730 09:10:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:57.730 09:10:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:57.730 09:10:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:57.730 09:10:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:58.298 { 00:10:58.298 "nbd_device": "/dev/nbd0", 00:10:58.298 "bdev_name": "Malloc0" 00:10:58.298 }, 00:10:58.298 { 00:10:58.298 "nbd_device": "/dev/nbd1", 00:10:58.298 "bdev_name": "Malloc1" 00:10:58.298 } 00:10:58.298 ]' 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:58.298 { 00:10:58.298 "nbd_device": "/dev/nbd0", 00:10:58.298 "bdev_name": "Malloc0" 00:10:58.298 }, 00:10:58.298 { 00:10:58.298 "nbd_device": "/dev/nbd1", 00:10:58.298 "bdev_name": "Malloc1" 00:10:58.298 } 00:10:58.298 ]' 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:58.298 /dev/nbd1' 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:58.298 /dev/nbd1' 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:58.298 
09:10:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:58.298 256+0 records in 00:10:58.298 256+0 records out 00:10:58.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00817342 s, 128 MB/s 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:58.298 256+0 records in 00:10:58.298 256+0 records out 00:10:58.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0316929 s, 33.1 MB/s 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:58.298 256+0 records in 00:10:58.298 256+0 records out 00:10:58.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0341582 s, 30.7 MB/s 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:58.298 09:10:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:58.865 09:10:42 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:58.865 09:10:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:58.865 09:10:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:58.865 09:10:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:58.865 09:10:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:58.865 09:10:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:58.865 09:10:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:58.865 09:10:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:58.865 09:10:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:58.865 09:10:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:59.124 09:10:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:59.124 09:10:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:59.124 09:10:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:59.124 09:10:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:59.124 09:10:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:59.124 09:10:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:59.124 09:10:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:59.124 09:10:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:59.124 09:10:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:59.124 09:10:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:59.124 09:10:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:59.382 09:10:43 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:59.382 09:10:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:59.382 09:10:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:59.382 09:10:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:59.382 09:10:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:59.382 09:10:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:59.382 09:10:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:59.382 09:10:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:59.382 09:10:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:59.382 09:10:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:59.382 09:10:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:59.382 09:10:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:59.382 09:10:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:59.949 09:10:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:01.324 [2024-10-15 09:10:44.964214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:01.324 [2024-10-15 09:10:45.113528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.324 [2024-10-15 09:10:45.113538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.582 [2024-10-15 09:10:45.333222] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:01.582 [2024-10-15 09:10:45.333347] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:02.958 spdk_app_start Round 2 00:11:02.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
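Each round's device setup and teardown in the log relies on the `waitfornbd` / `waitfornbd_exit` helpers: up to 20 retries of `grep -q -w <name> /proc/partitions`, breaking as soon as the device appears (or, for the exit variant, disappears). A hedged sketch of that polling pattern follows; a temp file stands in for `/proc/partitions` (an assumption so it runs anywhere), and the retry delay is illustrative rather than taken from the real script.

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd polling loop from the log above.
# ASSUMPTION: a temp file stands in for /proc/partitions, and the
# 0.1 s retry delay is illustrative, not the script's actual value.
partitions=$(mktemp)
printf 'nbd0\nnbd1\n' > "$partitions"

waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # -w matches the whole device name, so nbd1 never matches nbd10
        grep -q -w "$nbd_name" "$partitions" && return 0
        sleep 0.1
    done
    return 1
}

waitfornbd nbd0 && echo "nbd0 is up"
waitfornbd nbd9 || echo "nbd9 never appeared"
```

The exit variant in the log inverts the condition, breaking only once the name is gone from `/proc/partitions` after `nbd_stop_disk`.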
00:11:02.958 09:10:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:02.958 09:10:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:02.958 09:10:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58405 /var/tmp/spdk-nbd.sock 00:11:02.958 09:10:46 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58405 ']' 00:11:02.958 09:10:46 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:02.958 09:10:46 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:02.958 09:10:46 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:02.958 09:10:46 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:02.958 09:10:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:03.217 09:10:47 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:03.217 09:10:47 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:11:03.217 09:10:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:03.784 Malloc0 00:11:03.784 09:10:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:04.044 Malloc1 00:11:04.044 09:10:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:04.044 09:10:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:04.044 09:10:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:04.044 09:10:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:04.044 09:10:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:04.044 09:10:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:04.044 09:10:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:04.044 09:10:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:04.044 09:10:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:04.044 09:10:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:04.044 09:10:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:04.044 09:10:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:04.044 09:10:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:04.044 09:10:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:04.044 09:10:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:04.044 09:10:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:04.302 /dev/nbd0 00:11:04.302 09:10:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:04.302 09:10:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:04.302 09:10:48 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:04.302 09:10:48 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:11:04.302 09:10:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:04.302 09:10:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:04.302 09:10:48 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:04.302 09:10:48 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:11:04.302 09:10:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:11:04.302 09:10:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:04.302 09:10:48 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:04.302 1+0 records in 00:11:04.302 1+0 records out 00:11:04.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303111 s, 13.5 MB/s 00:11:04.302 09:10:48 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:04.302 09:10:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:11:04.302 09:10:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:04.302 09:10:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:04.302 09:10:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:11:04.302 09:10:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:04.302 09:10:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:04.302 09:10:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:04.560 /dev/nbd1 00:11:04.560 09:10:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:04.560 09:10:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:04.560 09:10:48 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:04.560 09:10:48 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:11:04.561 09:10:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:04.561 09:10:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:04.561 09:10:48 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:04.561 09:10:48 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:11:04.561 09:10:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:04.561 09:10:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:04.561 09:10:48 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:04.561 1+0 records in 00:11:04.561 1+0 records out 00:11:04.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581985 s, 7.0 MB/s 00:11:04.561 09:10:48 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:04.825 09:10:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:11:04.825 09:10:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:04.825 09:10:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:04.825 09:10:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:11:04.825 09:10:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:04.825 09:10:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:04.825 09:10:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:04.825 09:10:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:04.825 09:10:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:05.095 09:10:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:05.095 { 00:11:05.095 "nbd_device": "/dev/nbd0", 00:11:05.095 "bdev_name": "Malloc0" 00:11:05.095 }, 00:11:05.095 { 00:11:05.095 "nbd_device": "/dev/nbd1", 00:11:05.095 "bdev_name": "Malloc1" 00:11:05.095 } 00:11:05.095 ]' 00:11:05.095 09:10:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:05.095 { 
00:11:05.095 "nbd_device": "/dev/nbd0", 00:11:05.095 "bdev_name": "Malloc0" 00:11:05.095 }, 00:11:05.095 { 00:11:05.095 "nbd_device": "/dev/nbd1", 00:11:05.095 "bdev_name": "Malloc1" 00:11:05.095 } 00:11:05.095 ]' 00:11:05.095 09:10:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:05.095 09:10:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:05.095 /dev/nbd1' 00:11:05.095 09:10:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:05.095 /dev/nbd1' 00:11:05.095 09:10:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:05.095 09:10:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:05.095 09:10:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:05.095 09:10:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:05.095 09:10:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:05.095 09:10:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:05.095 09:10:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:05.095 09:10:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:05.095 09:10:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:05.095 09:10:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:05.095 09:10:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:05.095 09:10:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:05.095 256+0 records in 00:11:05.095 256+0 records out 00:11:05.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00886824 s, 118 MB/s 00:11:05.095 09:10:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:05.095 09:10:48 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:05.095 256+0 records in 00:11:05.095 256+0 records out 00:11:05.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.032303 s, 32.5 MB/s 00:11:05.095 09:10:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:05.095 09:10:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:05.095 256+0 records in 00:11:05.095 256+0 records out 00:11:05.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0418431 s, 25.1 MB/s 00:11:05.095 09:10:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:05.095 09:10:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:05.095 09:10:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:05.095 09:10:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:05.095 09:10:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:05.095 09:10:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:05.095 09:10:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:05.095 09:10:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:05.095 09:10:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:05.095 09:10:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:05.095 09:10:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:05.355 09:10:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:11:05.355 09:10:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:05.355 09:10:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:05.355 09:10:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:05.355 09:10:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:05.355 09:10:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:05.355 09:10:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:05.355 09:10:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:05.614 09:10:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:05.614 09:10:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:05.614 09:10:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:05.614 09:10:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:05.614 09:10:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:05.614 09:10:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:05.614 09:10:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:05.614 09:10:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:05.614 09:10:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:05.614 09:10:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:05.873 09:10:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:05.873 09:10:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:05.873 09:10:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:05.873 09:10:49 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:05.873 09:10:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:05.873 09:10:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:05.873 09:10:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:05.873 09:10:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:05.873 09:10:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:05.873 09:10:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:05.873 09:10:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:06.439 09:10:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:06.439 09:10:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:06.439 09:10:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:06.439 09:10:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:06.439 09:10:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:06.439 09:10:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:06.439 09:10:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:06.439 09:10:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:06.439 09:10:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:06.439 09:10:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:06.439 09:10:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:06.439 09:10:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:06.439 09:10:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:07.006 09:10:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:08.387 
[2024-10-15 09:10:51.991495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:08.387 [2024-10-15 09:10:52.149389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.387 [2024-10-15 09:10:52.149400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.644 [2024-10-15 09:10:52.374989] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:08.644 [2024-10-15 09:10:52.375142] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:10.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:10.021 09:10:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58405 /var/tmp/spdk-nbd.sock 00:11:10.021 09:10:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58405 ']' 00:11:10.021 09:10:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:10.021 09:10:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:10.021 09:10:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:11:10.021 09:10:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:10.021 09:10:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:10.280 09:10:54 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:10.280 09:10:54 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:11:10.280 09:10:54 event.app_repeat -- event/event.sh@39 -- # killprocess 58405 00:11:10.280 09:10:54 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58405 ']' 00:11:10.280 09:10:54 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58405 00:11:10.280 09:10:54 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:11:10.280 09:10:54 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:10.280 09:10:54 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58405 00:11:10.280 killing process with pid 58405 00:11:10.280 09:10:54 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:10.280 09:10:54 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:10.280 09:10:54 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58405' 00:11:10.280 09:10:54 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58405 00:11:10.280 09:10:54 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58405 00:11:11.658 spdk_app_start is called in Round 0. 00:11:11.658 Shutdown signal received, stop current app iteration 00:11:11.658 Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 reinitialization... 00:11:11.658 spdk_app_start is called in Round 1. 00:11:11.658 Shutdown signal received, stop current app iteration 00:11:11.658 Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 reinitialization... 00:11:11.658 spdk_app_start is called in Round 2. 
00:11:11.658 Shutdown signal received, stop current app iteration 00:11:11.658 Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 reinitialization... 00:11:11.658 spdk_app_start is called in Round 3. 00:11:11.659 Shutdown signal received, stop current app iteration 00:11:11.659 ************************************ 00:11:11.659 END TEST app_repeat 00:11:11.659 ************************************ 00:11:11.659 09:10:55 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:11.659 09:10:55 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:11.659 00:11:11.659 real 0m22.787s 00:11:11.659 user 0m50.178s 00:11:11.659 sys 0m3.594s 00:11:11.659 09:10:55 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.659 09:10:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:11.659 09:10:55 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:11.659 09:10:55 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:11.659 09:10:55 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:11.659 09:10:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.659 09:10:55 event -- common/autotest_common.sh@10 -- # set +x 00:11:11.659 ************************************ 00:11:11.659 START TEST cpu_locks 00:11:11.659 ************************************ 00:11:11.659 09:10:55 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:11.659 * Looking for test storage... 
00:11:11.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:11.659 09:10:55 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:11.659 09:10:55 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:11.659 09:10:55 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:11:11.659 09:10:55 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.659 09:10:55 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:11:11.659 09:10:55 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.659 09:10:55 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:11.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.659 --rc genhtml_branch_coverage=1 00:11:11.659 --rc genhtml_function_coverage=1 00:11:11.659 --rc genhtml_legend=1 00:11:11.659 --rc geninfo_all_blocks=1 00:11:11.659 --rc geninfo_unexecuted_blocks=1 00:11:11.659 00:11:11.659 ' 00:11:11.659 09:10:55 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:11.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.659 --rc genhtml_branch_coverage=1 00:11:11.659 --rc genhtml_function_coverage=1 00:11:11.659 --rc genhtml_legend=1 00:11:11.659 --rc geninfo_all_blocks=1 00:11:11.659 --rc geninfo_unexecuted_blocks=1 
00:11:11.659 00:11:11.659 ' 00:11:11.659 09:10:55 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:11.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.659 --rc genhtml_branch_coverage=1 00:11:11.659 --rc genhtml_function_coverage=1 00:11:11.659 --rc genhtml_legend=1 00:11:11.659 --rc geninfo_all_blocks=1 00:11:11.659 --rc geninfo_unexecuted_blocks=1 00:11:11.659 00:11:11.659 ' 00:11:11.659 09:10:55 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:11.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.659 --rc genhtml_branch_coverage=1 00:11:11.659 --rc genhtml_function_coverage=1 00:11:11.659 --rc genhtml_legend=1 00:11:11.659 --rc geninfo_all_blocks=1 00:11:11.659 --rc geninfo_unexecuted_blocks=1 00:11:11.659 00:11:11.659 ' 00:11:11.659 09:10:55 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:11.659 09:10:55 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:11.659 09:10:55 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:11.659 09:10:55 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:11.659 09:10:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:11.659 09:10:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.659 09:10:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:11.659 ************************************ 00:11:11.659 START TEST default_locks 00:11:11.659 ************************************ 00:11:11.659 09:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:11:11.659 09:10:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58896 00:11:11.659 09:10:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58896 00:11:11.659 09:10:55 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:11.659 09:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58896 ']' 00:11:11.659 09:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.659 09:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:11.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.659 09:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.659 09:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:11.659 09:10:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:11.918 [2024-10-15 09:10:55.654436] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:11:11.918 [2024-10-15 09:10:55.654622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58896 ] 00:11:11.918 [2024-10-15 09:10:55.831615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.176 [2024-10-15 09:10:56.045947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.550 09:10:57 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:13.550 09:10:57 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:11:13.550 09:10:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58896 00:11:13.551 09:10:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58896 00:11:13.551 09:10:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:13.810 09:10:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58896 00:11:13.810 09:10:57 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58896 ']' 00:11:13.810 09:10:57 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58896 00:11:13.810 09:10:57 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:11:13.810 09:10:57 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:13.810 09:10:57 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58896 00:11:13.810 killing process with pid 58896 00:11:13.810 09:10:57 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:13.810 09:10:57 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:13.810 09:10:57 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 58896' 00:11:13.810 09:10:57 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58896 00:11:13.810 09:10:57 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58896 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58896 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58896 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58896 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58896 ']' 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:16.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:16.414 09:11:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:16.414 ERROR: process (pid: 58896) is no longer running 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:16.414 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58896) - No such process 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:16.414 00:11:16.414 real 0m4.696s 00:11:16.414 user 0m4.725s 00:11:16.414 sys 0m0.829s 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.414 09:11:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:16.414 ************************************ 00:11:16.414 END TEST default_locks 00:11:16.414 ************************************ 00:11:16.414 09:11:00 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:16.414 09:11:00 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:11:16.414 09:11:00 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.414 09:11:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:16.414 ************************************ 00:11:16.414 START TEST default_locks_via_rpc 00:11:16.414 ************************************ 00:11:16.414 09:11:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:11:16.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.414 09:11:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58982 00:11:16.414 09:11:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58982 00:11:16.414 09:11:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58982 ']' 00:11:16.414 09:11:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:16.414 09:11:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.414 09:11:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:16.415 09:11:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.415 09:11:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:16.415 09:11:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.674 [2024-10-15 09:11:00.421682] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:11:16.674 [2024-10-15 09:11:00.421895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58982 ] 00:11:16.934 [2024-10-15 09:11:00.605204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.934 [2024-10-15 09:11:00.766521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.312 09:11:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:18.312 09:11:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:18.312 09:11:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:18.312 09:11:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.312 09:11:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.312 09:11:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.312 09:11:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:18.312 09:11:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:18.312 09:11:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:18.312 09:11:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:18.312 09:11:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:18.312 09:11:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.312 09:11:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.313 09:11:01 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.313 09:11:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58982 00:11:18.313 09:11:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58982 00:11:18.313 09:11:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:18.571 09:11:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58982 00:11:18.571 09:11:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58982 ']' 00:11:18.571 09:11:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58982 00:11:18.571 09:11:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:11:18.571 09:11:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:18.571 09:11:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58982 00:11:18.571 killing process with pid 58982 00:11:18.571 09:11:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:18.571 09:11:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:18.571 09:11:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58982' 00:11:18.571 09:11:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58982 00:11:18.571 09:11:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58982 00:11:21.104 ************************************ 00:11:21.104 END TEST default_locks_via_rpc 00:11:21.104 ************************************ 00:11:21.104 00:11:21.104 real 0m4.572s 00:11:21.104 user 0m4.607s 00:11:21.104 sys 0m0.931s 00:11:21.104 
09:11:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:21.104 09:11:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.104 09:11:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:21.104 09:11:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:21.104 09:11:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.104 09:11:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:21.104 ************************************ 00:11:21.104 START TEST non_locking_app_on_locked_coremask 00:11:21.104 ************************************ 00:11:21.104 09:11:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:11:21.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:21.104 09:11:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59056 00:11:21.104 09:11:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59056 /var/tmp/spdk.sock 00:11:21.104 09:11:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:21.104 09:11:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59056 ']' 00:11:21.104 09:11:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.104 09:11:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:21.104 09:11:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.104 09:11:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:21.104 09:11:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:21.363 [2024-10-15 09:11:05.040414] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:11:21.363 [2024-10-15 09:11:05.040912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59056 ] 00:11:21.363 [2024-10-15 09:11:05.226841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.623 [2024-10-15 09:11:05.410941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.564 09:11:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:22.564 09:11:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:22.564 09:11:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59078 00:11:22.564 09:11:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:22.564 09:11:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59078 /var/tmp/spdk2.sock 00:11:22.564 09:11:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59078 ']' 00:11:22.564 09:11:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:22.564 09:11:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:22.564 09:11:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:22.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:11:22.564 09:11:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:22.564 09:11:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:22.823 [2024-10-15 09:11:06.538618] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:11:22.823 [2024-10-15 09:11:06.539180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59078 ] 00:11:22.823 [2024-10-15 09:11:06.729079] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:22.823 [2024-10-15 09:11:06.729214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.392 [2024-10-15 09:11:07.024457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.925 09:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:25.925 09:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:25.925 09:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59056 00:11:25.925 09:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59056 00:11:25.925 09:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:26.493 09:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59056 00:11:26.493 09:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59056 ']' 00:11:26.493 09:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59056 00:11:26.493 09:11:10 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:11:26.493 09:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:26.493 09:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59056 00:11:26.493 killing process with pid 59056 00:11:26.493 09:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:26.493 09:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:26.493 09:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59056' 00:11:26.493 09:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59056 00:11:26.493 09:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59056 00:11:31.758 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59078 00:11:31.758 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59078 ']' 00:11:31.758 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59078 00:11:31.758 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:11:31.758 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:31.758 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59078 00:11:32.016 killing process with pid 59078 00:11:32.016 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:11:32.016 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:32.016 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59078' 00:11:32.016 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59078 00:11:32.016 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59078 00:11:34.600 00:11:34.600 real 0m13.339s 00:11:34.600 user 0m13.609s 00:11:34.600 sys 0m1.784s 00:11:34.600 09:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:34.600 09:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:34.600 ************************************ 00:11:34.600 END TEST non_locking_app_on_locked_coremask 00:11:34.600 ************************************ 00:11:34.600 09:11:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:34.600 09:11:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:34.600 09:11:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:34.600 09:11:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:34.600 ************************************ 00:11:34.600 START TEST locking_app_on_unlocked_coremask 00:11:34.600 ************************************ 00:11:34.600 09:11:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:11:34.600 09:11:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59242 00:11:34.600 09:11:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59242 
/var/tmp/spdk.sock 00:11:34.600 09:11:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59242 ']' 00:11:34.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.600 09:11:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.600 09:11:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:34.600 09:11:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:34.600 09:11:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.600 09:11:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:34.600 09:11:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:34.600 [2024-10-15 09:11:18.428994] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:11:34.600 [2024-10-15 09:11:18.429249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59242 ] 00:11:34.858 [2024-10-15 09:11:18.606106] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:34.858 [2024-10-15 09:11:18.606429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.858 [2024-10-15 09:11:18.762310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:11:36.235 09:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:36.235 09:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:36.235 09:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59264 00:11:36.235 09:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59264 /var/tmp/spdk2.sock 00:11:36.235 09:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:36.235 09:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59264 ']' 00:11:36.235 09:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:36.235 09:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:36.235 09:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:36.235 09:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:36.235 09:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:36.235 [2024-10-15 09:11:19.859843] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:11:36.235 [2024-10-15 09:11:19.860280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59264 ] 00:11:36.235 [2024-10-15 09:11:20.040882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.494 [2024-10-15 09:11:20.379053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.033 09:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:39.033 09:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:39.033 09:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59264 00:11:39.033 09:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59264 00:11:39.033 09:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:39.966 09:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59242 00:11:39.966 09:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59242 ']' 00:11:39.966 09:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59242 00:11:39.966 09:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:11:39.966 09:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:39.966 09:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59242 00:11:39.966 killing process with pid 59242 00:11:39.966 09:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:39.966 09:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:39.966 09:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59242' 00:11:39.966 09:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59242 00:11:39.966 09:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59242 00:11:45.235 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59264 00:11:45.235 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59264 ']' 00:11:45.235 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59264 00:11:45.235 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:11:45.235 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:45.235 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59264 00:11:45.235 killing process with pid 59264 00:11:45.235 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:45.235 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:45.235 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59264' 00:11:45.235 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59264 00:11:45.235 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@974 -- # wait 59264 00:11:47.766 00:11:47.766 real 0m12.883s 00:11:47.766 user 0m13.180s 00:11:47.766 sys 0m1.791s 00:11:47.766 09:11:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:47.766 ************************************ 00:11:47.766 END TEST locking_app_on_unlocked_coremask 00:11:47.766 09:11:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:47.766 ************************************ 00:11:47.766 09:11:31 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:47.766 09:11:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:47.766 09:11:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:47.766 09:11:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:47.766 ************************************ 00:11:47.766 START TEST locking_app_on_locked_coremask 00:11:47.766 ************************************ 00:11:47.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:47.766 09:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:11:47.766 09:11:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59423 00:11:47.766 09:11:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59423 /var/tmp/spdk.sock 00:11:47.766 09:11:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:47.766 09:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59423 ']' 00:11:47.766 09:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.766 09:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:47.766 09:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.766 09:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:47.766 09:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:47.766 [2024-10-15 09:11:31.377140] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:11:47.766 [2024-10-15 09:11:31.377339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59423 ] 00:11:47.766 [2024-10-15 09:11:31.548600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.024 [2024-10-15 09:11:31.696803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.961 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:48.961 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:48.961 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59439 00:11:48.961 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:48.961 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59439 /var/tmp/spdk2.sock 00:11:48.961 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:11:48.961 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59439 /var/tmp/spdk2.sock 00:11:48.961 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:48.961 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:48.961 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:48.961 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:11:48.961 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59439 /var/tmp/spdk2.sock 00:11:48.962 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59439 ']' 00:11:48.962 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:48.962 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:48.962 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:48.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:48.962 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:48.962 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:48.962 [2024-10-15 09:11:32.801769] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:11:48.962 [2024-10-15 09:11:32.802071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59439 ] 00:11:49.221 [2024-10-15 09:11:33.001638] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59423 has claimed it. 00:11:49.221 [2024-10-15 09:11:33.001737] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:11:49.788 ERROR: process (pid: 59439) is no longer running 00:11:49.788 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59439) - No such process 00:11:49.788 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:49.788 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:11:49.788 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:11:49.788 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:49.788 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:49.788 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:49.788 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59423 00:11:49.788 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59423 00:11:49.788 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:50.047 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59423 00:11:50.047 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59423 ']' 00:11:50.047 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59423 00:11:50.047 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:11:50.306 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:50.306 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59423 00:11:50.306 
killing process with pid 59423 00:11:50.306 09:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:50.306 09:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:50.306 09:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59423' 00:11:50.306 09:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59423 00:11:50.306 09:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59423 00:11:52.838 00:11:52.838 real 0m5.184s 00:11:52.838 user 0m5.414s 00:11:52.838 sys 0m1.052s 00:11:52.838 09:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:52.838 ************************************ 00:11:52.838 END TEST locking_app_on_locked_coremask 00:11:52.838 ************************************ 00:11:52.838 09:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:52.838 09:11:36 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:52.838 09:11:36 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:52.838 09:11:36 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:52.838 09:11:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:52.838 ************************************ 00:11:52.838 START TEST locking_overlapped_coremask 00:11:52.838 ************************************ 00:11:52.838 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:11:52.838 09:11:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59514 00:11:52.838 09:11:36 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:52.838 09:11:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59514 /var/tmp/spdk.sock 00:11:52.838 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59514 ']' 00:11:52.838 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.838 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:52.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.838 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.838 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:52.838 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:52.838 [2024-10-15 09:11:36.585590] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:11:52.838 [2024-10-15 09:11:36.586451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59514 ] 00:11:53.097 [2024-10-15 09:11:36.771766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:53.097 [2024-10-15 09:11:36.954703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.097 [2024-10-15 09:11:36.954831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.097 [2024-10-15 09:11:36.954846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.033 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:54.033 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:54.033 09:11:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59532 00:11:54.033 09:11:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59532 /var/tmp/spdk2.sock 00:11:54.033 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:11:54.033 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59532 /var/tmp/spdk2.sock 00:11:54.033 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:54.033 09:11:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:54.033 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:54.033 09:11:37 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:54.033 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:54.033 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59532 /var/tmp/spdk2.sock 00:11:54.033 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59532 ']' 00:11:54.033 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:54.033 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:54.033 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:54.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:54.033 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:54.033 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:54.291 [2024-10-15 09:11:38.055078] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:11:54.291 [2024-10-15 09:11:38.055283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59532 ] 00:11:54.610 [2024-10-15 09:11:38.242238] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59514 has claimed it. 00:11:54.610 [2024-10-15 09:11:38.242324] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:11:54.875 ERROR: process (pid: 59532) is no longer running 00:11:54.875 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59532) - No such process 00:11:54.875 09:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:54.875 09:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:11:54.875 09:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:11:54.875 09:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:54.875 09:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:54.875 09:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:54.875 09:11:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:54.875 09:11:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:54.875 09:11:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:54.875 09:11:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:54.875 09:11:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59514 00:11:54.875 09:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59514 ']' 00:11:54.875 09:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59514 00:11:54.875 09:11:38 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:11:54.875 09:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:54.875 09:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59514 00:11:54.875 09:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:54.875 09:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:54.875 09:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59514' 00:11:54.875 killing process with pid 59514 00:11:54.875 09:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59514 00:11:54.875 09:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59514 00:11:57.408 00:11:57.408 real 0m4.720s 00:11:57.408 user 0m12.637s 00:11:57.408 sys 0m0.851s 00:11:57.408 ************************************ 00:11:57.408 END TEST locking_overlapped_coremask 00:11:57.408 ************************************ 00:11:57.408 09:11:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.408 09:11:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:57.408 09:11:41 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:57.408 09:11:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:57.408 09:11:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.408 09:11:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:57.408 ************************************ 00:11:57.408 START TEST 
locking_overlapped_coremask_via_rpc 00:11:57.408 ************************************ 00:11:57.408 09:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:11:57.408 09:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59605 00:11:57.408 09:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59605 /var/tmp/spdk.sock 00:11:57.408 09:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:57.408 09:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59605 ']' 00:11:57.408 09:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.408 09:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:57.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.408 09:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.408 09:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:57.408 09:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.667 [2024-10-15 09:11:41.342013] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:11:57.667 [2024-10-15 09:11:41.342244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59605 ] 00:11:57.667 [2024-10-15 09:11:41.512996] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:57.667 [2024-10-15 09:11:41.513080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:57.927 [2024-10-15 09:11:41.692418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.927 [2024-10-15 09:11:41.692541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.927 [2024-10-15 09:11:41.692543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.862 09:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:58.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:11:58.862 09:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:58.862 09:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59625 00:11:58.862 09:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:58.862 09:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59625 /var/tmp/spdk2.sock 00:11:58.862 09:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59625 ']' 00:11:58.862 09:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:58.862 09:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:58.862 09:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:58.862 09:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:58.862 09:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.862 [2024-10-15 09:11:42.776575] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:11:58.862 [2024-10-15 09:11:42.777031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59625 ] 00:11:59.120 [2024-10-15 09:11:42.957358] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:59.120 [2024-10-15 09:11:42.957422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:59.429 [2024-10-15 09:11:43.262517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.429 [2024-10-15 09:11:43.266285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.429 [2024-10-15 09:11:43.266311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.959 09:11:45 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.959 [2024-10-15 09:11:45.589424] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59605 has claimed it. 00:12:01.959 request: 00:12:01.959 { 00:12:01.959 "method": "framework_enable_cpumask_locks", 00:12:01.959 "req_id": 1 00:12:01.959 } 00:12:01.959 Got JSON-RPC error response 00:12:01.959 response: 00:12:01.959 { 00:12:01.959 "code": -32603, 00:12:01.959 "message": "Failed to claim CPU core: 2" 00:12:01.959 } 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59605 /var/tmp/spdk.sock 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 59605 ']' 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:01.959 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:02.217 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:02.217 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:02.217 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59625 /var/tmp/spdk2.sock 00:12:02.217 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59625 ']' 00:12:02.217 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:02.217 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:02.217 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:12:02.217 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:02.217 09:11:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.474 09:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:02.474 09:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:02.474 09:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:12:02.474 09:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:02.474 09:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:02.474 ************************************ 00:12:02.474 END TEST locking_overlapped_coremask_via_rpc 00:12:02.475 ************************************ 00:12:02.475 09:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:02.475 00:12:02.475 real 0m4.990s 00:12:02.475 user 0m1.832s 00:12:02.475 sys 0m0.262s 00:12:02.475 09:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:02.475 09:11:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.475 09:11:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:12:02.475 09:11:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59605 ]] 00:12:02.475 09:11:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59605 00:12:02.475 09:11:46 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59605 ']' 00:12:02.475 09:11:46 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59605 00:12:02.475 09:11:46 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:12:02.475 09:11:46 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:02.475 09:11:46 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59605 00:12:02.475 09:11:46 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:02.475 09:11:46 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:02.475 09:11:46 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59605' 00:12:02.475 killing process with pid 59605 00:12:02.475 09:11:46 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59605 00:12:02.475 09:11:46 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59605 00:12:05.004 09:11:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59625 ]] 00:12:05.004 09:11:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59625 00:12:05.004 09:11:48 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59625 ']' 00:12:05.004 09:11:48 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59625 00:12:05.004 09:11:48 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:12:05.004 09:11:48 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:05.004 09:11:48 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59625 00:12:05.004 killing process with pid 59625 00:12:05.004 09:11:48 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:12:05.004 09:11:48 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:12:05.004 09:11:48 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 59625' 00:12:05.004 09:11:48 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59625 00:12:05.004 09:11:48 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59625 00:12:07.534 09:11:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:07.534 Process with pid 59605 is not found 00:12:07.534 Process with pid 59625 is not found 00:12:07.534 09:11:51 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:12:07.534 09:11:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59605 ]] 00:12:07.534 09:11:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59605 00:12:07.534 09:11:51 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59605 ']' 00:12:07.534 09:11:51 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59605 00:12:07.534 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59605) - No such process 00:12:07.534 09:11:51 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59605 is not found' 00:12:07.534 09:11:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59625 ]] 00:12:07.534 09:11:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59625 00:12:07.534 09:11:51 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59625 ']' 00:12:07.534 09:11:51 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59625 00:12:07.534 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59625) - No such process 00:12:07.534 09:11:51 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59625 is not found' 00:12:07.534 09:11:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:07.534 00:12:07.534 real 0m55.876s 00:12:07.534 user 1m33.868s 00:12:07.534 sys 0m8.956s 00:12:07.534 09:11:51 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.534 09:11:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:07.534 
************************************ 00:12:07.534 END TEST cpu_locks 00:12:07.534 ************************************ 00:12:07.534 00:12:07.534 real 1m30.048s 00:12:07.534 user 2m41.773s 00:12:07.534 sys 0m13.749s 00:12:07.534 09:11:51 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.534 09:11:51 event -- common/autotest_common.sh@10 -- # set +x 00:12:07.534 ************************************ 00:12:07.534 END TEST event 00:12:07.534 ************************************ 00:12:07.534 09:11:51 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:07.534 09:11:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:07.534 09:11:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.534 09:11:51 -- common/autotest_common.sh@10 -- # set +x 00:12:07.534 ************************************ 00:12:07.534 START TEST thread 00:12:07.534 ************************************ 00:12:07.534 09:11:51 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:07.534 * Looking for test storage... 
00:12:07.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:07.534 09:11:51 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:07.534 09:11:51 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:12:07.534 09:11:51 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:07.534 09:11:51 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:07.534 09:11:51 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.534 09:11:51 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.534 09:11:51 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.534 09:11:51 thread -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.534 09:11:51 thread -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.534 09:11:51 thread -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.534 09:11:51 thread -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.534 09:11:51 thread -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.534 09:11:51 thread -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.534 09:11:51 thread -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.534 09:11:51 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.534 09:11:51 thread -- scripts/common.sh@344 -- # case "$op" in 00:12:07.534 09:11:51 thread -- scripts/common.sh@345 -- # : 1 00:12:07.534 09:11:51 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.534 09:11:51 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:07.534 09:11:51 thread -- scripts/common.sh@365 -- # decimal 1 00:12:07.534 09:11:51 thread -- scripts/common.sh@353 -- # local d=1 00:12:07.534 09:11:51 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.534 09:11:51 thread -- scripts/common.sh@355 -- # echo 1 00:12:07.534 09:11:51 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.534 09:11:51 thread -- scripts/common.sh@366 -- # decimal 2 00:12:07.534 09:11:51 thread -- scripts/common.sh@353 -- # local d=2 00:12:07.534 09:11:51 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.534 09:11:51 thread -- scripts/common.sh@355 -- # echo 2 00:12:07.793 09:11:51 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.793 09:11:51 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.793 09:11:51 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.793 09:11:51 thread -- scripts/common.sh@368 -- # return 0 00:12:07.793 09:11:51 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.793 09:11:51 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:07.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.793 --rc genhtml_branch_coverage=1 00:12:07.793 --rc genhtml_function_coverage=1 00:12:07.793 --rc genhtml_legend=1 00:12:07.793 --rc geninfo_all_blocks=1 00:12:07.793 --rc geninfo_unexecuted_blocks=1 00:12:07.793 00:12:07.793 ' 00:12:07.793 09:11:51 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:07.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.793 --rc genhtml_branch_coverage=1 00:12:07.793 --rc genhtml_function_coverage=1 00:12:07.793 --rc genhtml_legend=1 00:12:07.793 --rc geninfo_all_blocks=1 00:12:07.793 --rc geninfo_unexecuted_blocks=1 00:12:07.793 00:12:07.793 ' 00:12:07.793 09:11:51 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:07.793 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.793 --rc genhtml_branch_coverage=1 00:12:07.793 --rc genhtml_function_coverage=1 00:12:07.793 --rc genhtml_legend=1 00:12:07.793 --rc geninfo_all_blocks=1 00:12:07.793 --rc geninfo_unexecuted_blocks=1 00:12:07.793 00:12:07.793 ' 00:12:07.793 09:11:51 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:07.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.793 --rc genhtml_branch_coverage=1 00:12:07.793 --rc genhtml_function_coverage=1 00:12:07.793 --rc genhtml_legend=1 00:12:07.793 --rc geninfo_all_blocks=1 00:12:07.793 --rc geninfo_unexecuted_blocks=1 00:12:07.793 00:12:07.793 ' 00:12:07.793 09:11:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:07.793 09:11:51 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:12:07.793 09:11:51 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.793 09:11:51 thread -- common/autotest_common.sh@10 -- # set +x 00:12:07.793 ************************************ 00:12:07.793 START TEST thread_poller_perf 00:12:07.793 ************************************ 00:12:07.793 09:11:51 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:07.793 [2024-10-15 09:11:51.523886] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:12:07.793 [2024-10-15 09:11:51.524312] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59826 ] 00:12:07.793 [2024-10-15 09:11:51.705638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.052 [2024-10-15 09:11:51.887995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.052 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:12:09.430 [2024-10-15T09:11:53.358Z] ====================================== 00:12:09.430 [2024-10-15T09:11:53.358Z] busy:2217186676 (cyc) 00:12:09.430 [2024-10-15T09:11:53.358Z] total_run_count: 288000 00:12:09.430 [2024-10-15T09:11:53.358Z] tsc_hz: 2200000000 (cyc) 00:12:09.430 [2024-10-15T09:11:53.358Z] ====================================== 00:12:09.430 [2024-10-15T09:11:53.358Z] poller_cost: 7698 (cyc), 3499 (nsec) 00:12:09.430 00:12:09.430 real 0m1.683s 00:12:09.430 user 0m1.455s 00:12:09.430 sys 0m0.117s 00:12:09.430 09:11:53 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:09.430 ************************************ 00:12:09.430 END TEST thread_poller_perf 00:12:09.430 ************************************ 00:12:09.430 09:11:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:09.430 09:11:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:09.430 09:11:53 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:12:09.430 09:11:53 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:09.430 09:11:53 thread -- common/autotest_common.sh@10 -- # set +x 00:12:09.430 ************************************ 00:12:09.430 START TEST thread_poller_perf 00:12:09.430 
************************************ 00:12:09.430 09:11:53 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:09.430 [2024-10-15 09:11:53.270273] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:12:09.430 [2024-10-15 09:11:53.270489] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59862 ] 00:12:09.689 [2024-10-15 09:11:53.450712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.689 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:12:09.689 [2024-10-15 09:11:53.605331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.065 [2024-10-15T09:11:54.993Z] ====================================== 00:12:11.065 [2024-10-15T09:11:54.993Z] busy:2204420163 (cyc) 00:12:11.065 [2024-10-15T09:11:54.993Z] total_run_count: 3645000 00:12:11.065 [2024-10-15T09:11:54.993Z] tsc_hz: 2200000000 (cyc) 00:12:11.065 [2024-10-15T09:11:54.993Z] ====================================== 00:12:11.065 [2024-10-15T09:11:54.993Z] poller_cost: 604 (cyc), 274 (nsec) 00:12:11.065 00:12:11.065 real 0m1.641s 00:12:11.065 user 0m1.401s 00:12:11.065 sys 0m0.130s 00:12:11.065 09:11:54 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:11.065 ************************************ 00:12:11.065 END TEST thread_poller_perf 00:12:11.065 ************************************ 00:12:11.065 09:11:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:11.065 09:11:54 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:12:11.065 ************************************ 00:12:11.065 END TEST thread 00:12:11.065 ************************************ 00:12:11.065 
00:12:11.065 real 0m3.613s 00:12:11.065 user 0m2.996s 00:12:11.065 sys 0m0.386s 00:12:11.065 09:11:54 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:11.065 09:11:54 thread -- common/autotest_common.sh@10 -- # set +x 00:12:11.065 09:11:54 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:12:11.065 09:11:54 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:11.065 09:11:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:11.065 09:11:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:11.065 09:11:54 -- common/autotest_common.sh@10 -- # set +x 00:12:11.065 ************************************ 00:12:11.065 START TEST app_cmdline 00:12:11.065 ************************************ 00:12:11.065 09:11:54 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:11.324 * Looking for test storage... 00:12:11.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:11.324 09:11:55 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:11.324 09:11:55 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:12:11.324 09:11:55 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:11.324 09:11:55 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:11.324 09:11:55 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:11.324 09:11:55 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:11.324 09:11:55 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:11.324 09:11:55 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.324 09:11:55 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:12:11.324 09:11:55 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:12:11.324 09:11:55 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:12:11.324 09:11:55 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:12:11.324 09:11:55 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:12:11.324 09:11:55 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:12:11.324 09:11:55 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:11.324 09:11:55 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:12:11.324 09:11:55 app_cmdline -- scripts/common.sh@345 -- # : 1 00:12:11.324 09:11:55 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:11.324 09:11:55 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:11.324 09:11:55 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:12:11.324 09:11:55 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:12:11.324 09:11:55 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.324 09:11:55 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:12:11.324 09:11:55 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:12:11.325 09:11:55 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:12:11.325 09:11:55 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:12:11.325 09:11:55 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.325 09:11:55 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:12:11.325 09:11:55 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:12:11.325 09:11:55 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:11.325 09:11:55 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:11.325 09:11:55 app_cmdline -- scripts/common.sh@368 -- # return 0 00:12:11.325 09:11:55 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.325 09:11:55 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:11.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.325 --rc genhtml_branch_coverage=1 00:12:11.325 --rc genhtml_function_coverage=1 00:12:11.325 --rc 
genhtml_legend=1 00:12:11.325 --rc geninfo_all_blocks=1 00:12:11.325 --rc geninfo_unexecuted_blocks=1 00:12:11.325 00:12:11.325 ' 00:12:11.325 09:11:55 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:11.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.325 --rc genhtml_branch_coverage=1 00:12:11.325 --rc genhtml_function_coverage=1 00:12:11.325 --rc genhtml_legend=1 00:12:11.325 --rc geninfo_all_blocks=1 00:12:11.325 --rc geninfo_unexecuted_blocks=1 00:12:11.325 00:12:11.325 ' 00:12:11.325 09:11:55 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:11.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.325 --rc genhtml_branch_coverage=1 00:12:11.325 --rc genhtml_function_coverage=1 00:12:11.325 --rc genhtml_legend=1 00:12:11.325 --rc geninfo_all_blocks=1 00:12:11.325 --rc geninfo_unexecuted_blocks=1 00:12:11.325 00:12:11.325 ' 00:12:11.325 09:11:55 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:11.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.325 --rc genhtml_branch_coverage=1 00:12:11.325 --rc genhtml_function_coverage=1 00:12:11.325 --rc genhtml_legend=1 00:12:11.325 --rc geninfo_all_blocks=1 00:12:11.325 --rc geninfo_unexecuted_blocks=1 00:12:11.325 00:12:11.325 ' 00:12:11.325 09:11:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:11.325 09:11:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59951 00:12:11.325 09:11:55 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:11.325 09:11:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59951 00:12:11.325 09:11:55 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59951 ']' 00:12:11.325 09:11:55 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.325 09:11:55 app_cmdline -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:12:11.325 09:11:55 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.325 09:11:55 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:11.325 09:11:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:11.583 [2024-10-15 09:11:55.281274] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:12:11.583 [2024-10-15 09:11:55.281467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59951 ] 00:12:11.583 [2024-10-15 09:11:55.463571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.842 [2024-10-15 09:11:55.637843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.819 09:11:56 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:12.819 09:11:56 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:12:12.819 09:11:56 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:13.078 { 00:12:13.078 "version": "SPDK v25.01-pre git sha1 aa3f30c36", 00:12:13.078 "fields": { 00:12:13.078 "major": 25, 00:12:13.078 "minor": 1, 00:12:13.078 "patch": 0, 00:12:13.078 "suffix": "-pre", 00:12:13.078 "commit": "aa3f30c36" 00:12:13.078 } 00:12:13.078 } 00:12:13.078 09:11:56 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:12:13.078 09:11:56 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:13.078 09:11:56 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:13.078 09:11:56 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:13.078 09:11:56 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:13.078 09:11:56 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:13.078 09:11:56 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.078 09:11:56 app_cmdline -- app/cmdline.sh@26 -- # sort 00:12:13.078 09:11:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:13.078 09:11:56 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.078 09:11:56 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:13.078 09:11:56 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:13.078 09:11:56 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:13.078 09:11:56 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:12:13.078 09:11:56 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:13.078 09:11:56 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:13.078 09:11:56 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:13.078 09:11:56 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:13.078 09:11:56 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:13.078 09:11:56 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:13.078 09:11:56 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:13.078 09:11:56 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:13.078 09:11:56 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:13.078 09:11:56 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:13.337 request: 00:12:13.337 { 00:12:13.337 "method": "env_dpdk_get_mem_stats", 00:12:13.337 "req_id": 1 00:12:13.337 } 00:12:13.337 Got JSON-RPC error response 00:12:13.337 response: 00:12:13.337 { 00:12:13.337 "code": -32601, 00:12:13.337 "message": "Method not found" 00:12:13.337 } 00:12:13.337 09:11:57 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:12:13.337 09:11:57 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:13.337 09:11:57 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:13.337 09:11:57 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:13.337 09:11:57 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59951 00:12:13.337 09:11:57 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59951 ']' 00:12:13.337 09:11:57 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59951 00:12:13.337 09:11:57 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:12:13.337 09:11:57 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:13.337 09:11:57 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59951 00:12:13.596 09:11:57 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:13.596 09:11:57 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:13.596 killing process with pid 59951 00:12:13.596 09:11:57 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59951' 00:12:13.596 09:11:57 app_cmdline -- common/autotest_common.sh@969 -- # kill 59951 00:12:13.596 09:11:57 app_cmdline -- common/autotest_common.sh@974 -- # wait 59951 00:12:16.128 00:12:16.128 real 0m4.619s 00:12:16.128 user 0m4.941s 00:12:16.128 sys 0m0.802s 00:12:16.128 09:11:59 app_cmdline -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:12:16.128 09:11:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:16.128 ************************************ 00:12:16.128 END TEST app_cmdline 00:12:16.128 ************************************ 00:12:16.128 09:11:59 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:16.128 09:11:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:16.128 09:11:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:16.128 09:11:59 -- common/autotest_common.sh@10 -- # set +x 00:12:16.128 ************************************ 00:12:16.128 START TEST version 00:12:16.128 ************************************ 00:12:16.128 09:11:59 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:16.128 * Looking for test storage... 00:12:16.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:16.128 09:11:59 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:16.128 09:11:59 version -- common/autotest_common.sh@1691 -- # lcov --version 00:12:16.128 09:11:59 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:16.128 09:11:59 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:16.128 09:11:59 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.128 09:11:59 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.128 09:11:59 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.128 09:11:59 version -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.128 09:11:59 version -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.128 09:11:59 version -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.128 09:11:59 version -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.128 09:11:59 version -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.128 09:11:59 version -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.128 09:11:59 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:12:16.128 09:11:59 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.128 09:11:59 version -- scripts/common.sh@344 -- # case "$op" in 00:12:16.128 09:11:59 version -- scripts/common.sh@345 -- # : 1 00:12:16.128 09:11:59 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.128 09:11:59 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:16.128 09:11:59 version -- scripts/common.sh@365 -- # decimal 1 00:12:16.128 09:11:59 version -- scripts/common.sh@353 -- # local d=1 00:12:16.128 09:11:59 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.128 09:11:59 version -- scripts/common.sh@355 -- # echo 1 00:12:16.128 09:11:59 version -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.128 09:11:59 version -- scripts/common.sh@366 -- # decimal 2 00:12:16.128 09:11:59 version -- scripts/common.sh@353 -- # local d=2 00:12:16.128 09:11:59 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.128 09:11:59 version -- scripts/common.sh@355 -- # echo 2 00:12:16.128 09:11:59 version -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.128 09:11:59 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.128 09:11:59 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.128 09:11:59 version -- scripts/common.sh@368 -- # return 0 00:12:16.128 09:11:59 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.128 09:11:59 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:16.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.128 --rc genhtml_branch_coverage=1 00:12:16.128 --rc genhtml_function_coverage=1 00:12:16.128 --rc genhtml_legend=1 00:12:16.128 --rc geninfo_all_blocks=1 00:12:16.128 --rc geninfo_unexecuted_blocks=1 00:12:16.128 00:12:16.128 ' 00:12:16.128 09:11:59 version -- common/autotest_common.sh@1704 -- # 
LCOV_OPTS=' 00:12:16.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.128 --rc genhtml_branch_coverage=1 00:12:16.128 --rc genhtml_function_coverage=1 00:12:16.128 --rc genhtml_legend=1 00:12:16.128 --rc geninfo_all_blocks=1 00:12:16.128 --rc geninfo_unexecuted_blocks=1 00:12:16.128 00:12:16.128 ' 00:12:16.128 09:11:59 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:16.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.128 --rc genhtml_branch_coverage=1 00:12:16.128 --rc genhtml_function_coverage=1 00:12:16.128 --rc genhtml_legend=1 00:12:16.128 --rc geninfo_all_blocks=1 00:12:16.128 --rc geninfo_unexecuted_blocks=1 00:12:16.128 00:12:16.129 ' 00:12:16.129 09:11:59 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:16.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.129 --rc genhtml_branch_coverage=1 00:12:16.129 --rc genhtml_function_coverage=1 00:12:16.129 --rc genhtml_legend=1 00:12:16.129 --rc geninfo_all_blocks=1 00:12:16.129 --rc geninfo_unexecuted_blocks=1 00:12:16.129 00:12:16.129 ' 00:12:16.129 09:11:59 version -- app/version.sh@17 -- # get_header_version major 00:12:16.129 09:11:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:16.129 09:11:59 version -- app/version.sh@14 -- # cut -f2 00:12:16.129 09:11:59 version -- app/version.sh@14 -- # tr -d '"' 00:12:16.129 09:11:59 version -- app/version.sh@17 -- # major=25 00:12:16.129 09:11:59 version -- app/version.sh@18 -- # get_header_version minor 00:12:16.129 09:11:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:16.129 09:11:59 version -- app/version.sh@14 -- # cut -f2 00:12:16.129 09:11:59 version -- app/version.sh@14 -- # tr -d '"' 00:12:16.129 09:11:59 version -- app/version.sh@18 -- # minor=1 00:12:16.129 09:11:59 
version -- app/version.sh@19 -- # get_header_version patch 00:12:16.129 09:11:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:16.129 09:11:59 version -- app/version.sh@14 -- # cut -f2 00:12:16.129 09:11:59 version -- app/version.sh@14 -- # tr -d '"' 00:12:16.129 09:11:59 version -- app/version.sh@19 -- # patch=0 00:12:16.129 09:11:59 version -- app/version.sh@20 -- # get_header_version suffix 00:12:16.129 09:11:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:16.129 09:11:59 version -- app/version.sh@14 -- # cut -f2 00:12:16.129 09:11:59 version -- app/version.sh@14 -- # tr -d '"' 00:12:16.129 09:11:59 version -- app/version.sh@20 -- # suffix=-pre 00:12:16.129 09:11:59 version -- app/version.sh@22 -- # version=25.1 00:12:16.129 09:11:59 version -- app/version.sh@25 -- # (( patch != 0 )) 00:12:16.129 09:11:59 version -- app/version.sh@28 -- # version=25.1rc0 00:12:16.129 09:11:59 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:16.129 09:11:59 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:16.129 09:11:59 version -- app/version.sh@30 -- # py_version=25.1rc0 00:12:16.129 09:11:59 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:12:16.129 ************************************ 00:12:16.129 END TEST version 00:12:16.129 ************************************ 00:12:16.129 00:12:16.129 real 0m0.257s 00:12:16.129 user 0m0.176s 00:12:16.129 sys 0m0.119s 00:12:16.129 09:11:59 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:16.129 09:11:59 version -- common/autotest_common.sh@10 -- # set +x 00:12:16.129 
09:11:59 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:12:16.129 09:11:59 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:12:16.129 09:11:59 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:12:16.129 09:11:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:16.129 09:11:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:16.129 09:11:59 -- common/autotest_common.sh@10 -- # set +x 00:12:16.129 ************************************ 00:12:16.129 START TEST bdev_raid 00:12:16.129 ************************************ 00:12:16.129 09:11:59 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:12:16.129 * Looking for test storage... 00:12:16.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:16.129 09:12:00 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:16.129 09:12:00 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:12:16.129 09:12:00 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:16.386 09:12:00 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@345 -- # : 1 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.386 09:12:00 bdev_raid -- scripts/common.sh@368 -- # return 0 00:12:16.386 09:12:00 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.386 09:12:00 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:16.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.386 --rc genhtml_branch_coverage=1 00:12:16.386 --rc genhtml_function_coverage=1 00:12:16.386 --rc genhtml_legend=1 00:12:16.386 --rc geninfo_all_blocks=1 00:12:16.386 --rc geninfo_unexecuted_blocks=1 00:12:16.386 00:12:16.386 ' 00:12:16.386 09:12:00 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:16.386 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:16.386 --rc genhtml_branch_coverage=1 00:12:16.386 --rc genhtml_function_coverage=1 00:12:16.386 --rc genhtml_legend=1 00:12:16.386 --rc geninfo_all_blocks=1 00:12:16.386 --rc geninfo_unexecuted_blocks=1 00:12:16.386 00:12:16.386 ' 00:12:16.386 09:12:00 bdev_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:16.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.386 --rc genhtml_branch_coverage=1 00:12:16.386 --rc genhtml_function_coverage=1 00:12:16.386 --rc genhtml_legend=1 00:12:16.386 --rc geninfo_all_blocks=1 00:12:16.386 --rc geninfo_unexecuted_blocks=1 00:12:16.386 00:12:16.386 ' 00:12:16.386 09:12:00 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:16.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.386 --rc genhtml_branch_coverage=1 00:12:16.386 --rc genhtml_function_coverage=1 00:12:16.386 --rc genhtml_legend=1 00:12:16.386 --rc geninfo_all_blocks=1 00:12:16.386 --rc geninfo_unexecuted_blocks=1 00:12:16.386 00:12:16.386 ' 00:12:16.386 09:12:00 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:16.386 09:12:00 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:12:16.386 09:12:00 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:12:16.386 09:12:00 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:12:16.386 09:12:00 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:12:16.386 09:12:00 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:12:16.386 09:12:00 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:12:16.386 09:12:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:16.386 09:12:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:16.386 09:12:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:16.386 ************************************ 
00:12:16.386 START TEST raid1_resize_data_offset_test 00:12:16.386 ************************************ 00:12:16.386 09:12:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:12:16.386 09:12:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60139 00:12:16.386 Process raid pid: 60139 00:12:16.386 09:12:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60139' 00:12:16.387 09:12:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:16.387 09:12:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60139 00:12:16.387 09:12:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 60139 ']' 00:12:16.387 09:12:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.387 09:12:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:16.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.387 09:12:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.387 09:12:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:16.387 09:12:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.387 [2024-10-15 09:12:00.281436] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:12:16.387 [2024-10-15 09:12:00.281632] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.647 [2024-10-15 09:12:00.464754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.906 [2024-10-15 09:12:00.636687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.165 [2024-10-15 09:12:00.877698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.165 [2024-10-15 09:12:00.877786] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.423 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:17.423 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:12:17.423 09:12:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:12:17.423 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.423 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.682 malloc0 00:12:17.682 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.682 09:12:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:12:17.682 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.682 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.682 malloc1 00:12:17.682 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.682 09:12:01 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:12:17.682 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.682 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.682 null0 00:12:17.682 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.682 09:12:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:12:17.682 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.682 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.682 [2024-10-15 09:12:01.486849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:12:17.682 [2024-10-15 09:12:01.489324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:17.682 [2024-10-15 09:12:01.489398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:12:17.682 [2024-10-15 09:12:01.489616] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:17.682 [2024-10-15 09:12:01.489645] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:12:17.682 [2024-10-15 09:12:01.489992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:12:17.682 [2024-10-15 09:12:01.490251] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:17.682 [2024-10-15 09:12:01.490284] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:12:17.683 [2024-10-15 09:12:01.490463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:12:17.683 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.683 09:12:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.683 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.683 09:12:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:12:17.683 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.683 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.683 09:12:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:12:17.683 09:12:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:12:17.683 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.683 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.683 [2024-10-15 09:12:01.546955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:12:17.683 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.683 09:12:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:12:17.683 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.683 09:12:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.250 malloc2 00:12:18.250 09:12:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.250 09:12:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:12:18.250 09:12:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.250 09:12:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.250 [2024-10-15 09:12:02.139495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:18.250 [2024-10-15 09:12:02.157563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:18.250 09:12:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.250 [2024-10-15 09:12:02.160241] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:12:18.250 09:12:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.250 09:12:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:12:18.250 09:12:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.250 09:12:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.250 09:12:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.509 09:12:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:12:18.509 09:12:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60139 00:12:18.509 09:12:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 60139 ']' 00:12:18.509 09:12:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 60139 00:12:18.509 09:12:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:12:18.509 09:12:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:12:18.509 09:12:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60139 00:12:18.509 09:12:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:18.509 09:12:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:18.509 killing process with pid 60139 00:12:18.509 09:12:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60139' 00:12:18.509 09:12:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 60139 00:12:18.509 09:12:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 60139 00:12:18.510 [2024-10-15 09:12:02.242168] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:18.510 [2024-10-15 09:12:02.242387] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:12:18.510 [2024-10-15 09:12:02.242466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.510 [2024-10-15 09:12:02.242495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:12:18.510 [2024-10-15 09:12:02.275818] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.510 [2024-10-15 09:12:02.276337] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.510 [2024-10-15 09:12:02.276372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:12:20.412 [2024-10-15 09:12:04.029667] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:21.348 09:12:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:12:21.348 00:12:21.348 real 0m5.003s 00:12:21.348 user 0m4.907s 00:12:21.348 sys 0m0.753s 00:12:21.348 09:12:05 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.348 09:12:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.348 ************************************ 00:12:21.348 END TEST raid1_resize_data_offset_test 00:12:21.348 ************************************ 00:12:21.348 09:12:05 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:12:21.348 09:12:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:21.348 09:12:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.348 09:12:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:21.348 ************************************ 00:12:21.348 START TEST raid0_resize_superblock_test 00:12:21.348 ************************************ 00:12:21.348 09:12:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:12:21.348 09:12:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:12:21.348 09:12:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60228 00:12:21.348 Process raid pid: 60228 00:12:21.348 09:12:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60228' 00:12:21.348 09:12:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60228 00:12:21.348 09:12:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60228 ']' 00:12:21.348 09:12:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:21.348 09:12:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.348 09:12:05 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:12:21.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.348 09:12:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.348 09:12:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:21.348 09:12:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.607 [2024-10-15 09:12:05.323420] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:12:21.607 [2024-10-15 09:12:05.323661] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.607 [2024-10-15 09:12:05.507081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.865 [2024-10-15 09:12:05.655022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.124 [2024-10-15 09:12:05.866669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.124 [2024-10-15 09:12:05.866769] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.691 09:12:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:22.691 09:12:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:22.691 09:12:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:12:22.691 09:12:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.691 09:12:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:23.258 malloc0 00:12:23.258 09:12:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.258 09:12:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:12:23.258 09:12:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.258 09:12:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.258 [2024-10-15 09:12:06.952284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:12:23.258 [2024-10-15 09:12:06.952399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.258 [2024-10-15 09:12:06.952441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:23.258 [2024-10-15 09:12:06.952462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.258 [2024-10-15 09:12:06.955664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.258 [2024-10-15 09:12:06.955731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:12:23.258 pt0 00:12:23.258 09:12:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.258 09:12:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:12:23.258 09:12:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.258 09:12:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.258 76d7b6af-6550-4b06-974c-9fd6fd667dca 00:12:23.258 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.258 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:12:23.258 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.258 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.258 2530da97-df01-415f-98d5-d2b7d02b0261 00:12:23.258 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.258 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:12:23.258 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.258 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.258 2228ca09-bf8b-4cf3-bfad-2b5bd4ef1363 00:12:23.258 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.258 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:12:23.258 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:12:23.258 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.258 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.258 [2024-10-15 09:12:07.148484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2530da97-df01-415f-98d5-d2b7d02b0261 is claimed 00:12:23.258 [2024-10-15 09:12:07.148669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2228ca09-bf8b-4cf3-bfad-2b5bd4ef1363 is claimed 00:12:23.258 [2024-10-15 09:12:07.148944] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:23.258 [2024-10-15 09:12:07.149011] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:12:23.258 [2024-10-15 09:12:07.149565] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:23.258 [2024-10-15 09:12:07.149983] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:23.258 [2024-10-15 09:12:07.150024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:12:23.258 [2024-10-15 09:12:07.150360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.258 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.258 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:12:23.258 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:12:23.258 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.258 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.258 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:12:23.541 09:12:07 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.541 [2024-10-15 09:12:07.268821] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.541 [2024-10-15 09:12:07.316782] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:23.541 [2024-10-15 09:12:07.316824] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2530da97-df01-415f-98d5-d2b7d02b0261' was resized: old size 131072, new size 204800 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.541 [2024-10-15 09:12:07.324616] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:23.541 [2024-10-15 09:12:07.324647] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2228ca09-bf8b-4cf3-bfad-2b5bd4ef1363' was resized: old size 131072, new size 204800 00:12:23.541 [2024-10-15 09:12:07.324699] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.541 09:12:07 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.541 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.542 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:12:23.542 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:23.542 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:23.542 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:23.542 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:12:23.542 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.542 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.542 [2024-10-15 09:12:07.436764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.542 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.801 [2024-10-15 09:12:07.488501] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:12:23.801 [2024-10-15 09:12:07.488601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:12:23.801 [2024-10-15 09:12:07.488620] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.801 [2024-10-15 09:12:07.488642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:12:23.801 [2024-10-15 09:12:07.488788] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.801 [2024-10-15 09:12:07.488838] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.801 [2024-10-15 09:12:07.488858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.801 [2024-10-15 09:12:07.496414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:12:23.801 [2024-10-15 09:12:07.496489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.801 [2024-10-15 09:12:07.496521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:12:23.801 [2024-10-15 09:12:07.496541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.801 [2024-10-15 09:12:07.499603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.801 [2024-10-15 09:12:07.499652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:12:23.801 pt0 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.801 [2024-10-15 09:12:07.501997] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2530da97-df01-415f-98d5-d2b7d02b0261 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.801 [2024-10-15 09:12:07.502113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2530da97-df01-415f-98d5-d2b7d02b0261 is claimed 00:12:23.801 [2024-10-15 09:12:07.502272] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2228ca09-bf8b-4cf3-bfad-2b5bd4ef1363 00:12:23.801 [2024-10-15 09:12:07.502306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2228ca09-bf8b-4cf3-bfad-2b5bd4ef1363 is claimed 00:12:23.801 [2024-10-15 09:12:07.502466] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 2228ca09-bf8b-4cf3-bfad-2b5bd4ef1363 (2) smaller than existing raid bdev Raid (3) 00:12:23.801 [2024-10-15 09:12:07.502514] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 2530da97-df01-415f-98d5-d2b7d02b0261: File exists 00:12:23.801 [2024-10-15 09:12:07.502578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:23.801 [2024-10-15 09:12:07.502597] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:12:23.801 [2024-10-15 09:12:07.502928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:23.801 [2024-10-15 09:12:07.503153] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:23.801 [2024-10-15 
09:12:07.503173] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:12:23.801 [2024-10-15 09:12:07.503369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:12:23.801 [2024-10-15 09:12:07.516758] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60228 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60228 ']' 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60228 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60228 00:12:23.801 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:23.802 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:23.802 killing process with pid 60228 00:12:23.802 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60228' 00:12:23.802 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60228 00:12:23.802 [2024-10-15 09:12:07.596050] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:23.802 09:12:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60228 00:12:23.802 [2024-10-15 09:12:07.596157] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.802 [2024-10-15 09:12:07.596224] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.802 [2024-10-15 09:12:07.596244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:12:25.178 [2024-10-15 09:12:08.996008] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:26.555 09:12:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:12:26.555 00:12:26.555 real 0m4.889s 00:12:26.555 user 0m5.146s 00:12:26.555 sys 0m0.755s 00:12:26.555 09:12:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:26.555 09:12:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.555 
************************************ 00:12:26.555 END TEST raid0_resize_superblock_test 00:12:26.555 ************************************ 00:12:26.555 09:12:10 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:12:26.555 09:12:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:26.555 09:12:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:26.555 09:12:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:26.555 ************************************ 00:12:26.555 START TEST raid1_resize_superblock_test 00:12:26.555 ************************************ 00:12:26.555 09:12:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:12:26.555 09:12:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:12:26.555 09:12:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60332 00:12:26.555 Process raid pid: 60332 00:12:26.555 09:12:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60332' 00:12:26.555 09:12:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:26.555 09:12:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60332 00:12:26.555 09:12:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60332 ']' 00:12:26.555 09:12:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.555 09:12:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:26.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:26.555 09:12:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.555 09:12:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:26.555 09:12:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.555 [2024-10-15 09:12:10.259347] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:12:26.555 [2024-10-15 09:12:10.259563] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.555 [2024-10-15 09:12:10.438434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.814 [2024-10-15 09:12:10.593540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.072 [2024-10-15 09:12:10.818878] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.072 [2024-10-15 09:12:10.818923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.330 09:12:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:27.330 09:12:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:27.330 09:12:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:12:27.330 09:12:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.330 09:12:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.896 malloc0 00:12:27.896 09:12:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.896 09:12:11 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:12:27.896 09:12:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.896 09:12:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.896 [2024-10-15 09:12:11.793434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:12:27.896 [2024-10-15 09:12:11.793515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.896 [2024-10-15 09:12:11.793553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:27.896 [2024-10-15 09:12:11.793573] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.896 [2024-10-15 09:12:11.796471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.896 [2024-10-15 09:12:11.796523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:12:27.896 pt0 00:12:27.896 09:12:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.896 09:12:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:12:27.896 09:12:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.896 09:12:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.154 57021461-1901-4090-b7ea-df6535ff529e 00:12:28.154 09:12:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.154 09:12:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:12:28.154 09:12:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.154 09:12:11 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.154 48293058-deb8-4fa0-9350-49564b57413a 00:12:28.154 09:12:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.154 09:12:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:12:28.154 09:12:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.154 09:12:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.154 6e4698d5-aeb4-4036-b9d2-60695479bbc7 00:12:28.154 09:12:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.154 09:12:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:12:28.154 09:12:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:12:28.154 09:12:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.154 09:12:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.154 [2024-10-15 09:12:11.986997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 48293058-deb8-4fa0-9350-49564b57413a is claimed 00:12:28.154 [2024-10-15 09:12:11.987171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6e4698d5-aeb4-4036-b9d2-60695479bbc7 is claimed 00:12:28.154 [2024-10-15 09:12:11.987397] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:28.154 [2024-10-15 09:12:11.987443] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:12:28.154 [2024-10-15 09:12:11.987779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:28.154 [2024-10-15 09:12:11.988077] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:28.154 [2024-10-15 09:12:11.988104] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:12:28.154 [2024-10-15 09:12:11.988317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.154 09:12:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.154 09:12:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:12:28.154 09:12:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:12:28.154 09:12:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.154 09:12:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.154 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.154 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:12:28.154 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:12:28.154 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.154 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.154 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:12:28.154 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:12:28.413 [2024-10-15 09:12:12.099435] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.413 [2024-10-15 09:12:12.143415] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:28.413 [2024-10-15 09:12:12.143460] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '48293058-deb8-4fa0-9350-49564b57413a' was resized: old size 131072, new size 204800 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:12:28.413 09:12:12 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.413 [2024-10-15 09:12:12.151221] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:28.413 [2024-10-15 09:12:12.151254] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '6e4698d5-aeb4-4036-b9d2-60695479bbc7' was resized: old size 131072, new size 204800 00:12:28.413 [2024-10-15 09:12:12.151291] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.413 [2024-10-15 09:12:12.271403] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.413 [2024-10-15 09:12:12.319110] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:12:28.413 [2024-10-15 09:12:12.319272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:12:28.413 [2024-10-15 09:12:12.319312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:12:28.413 [2024-10-15 09:12:12.319543] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:28.413 [2024-10-15 09:12:12.319862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.413 [2024-10-15 09:12:12.319972] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.413 [2024-10-15 09:12:12.319997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.413 [2024-10-15 09:12:12.327006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:12:28.413 [2024-10-15 09:12:12.327111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.413 [2024-10-15 09:12:12.327158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:12:28.413 [2024-10-15 09:12:12.327180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.413 [2024-10-15 09:12:12.330312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.413 [2024-10-15 09:12:12.330361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:12:28.413 pt0 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.413 
09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.413 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.413 [2024-10-15 09:12:12.332732] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 48293058-deb8-4fa0-9350-49564b57413a 00:12:28.413 [2024-10-15 09:12:12.332822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 48293058-deb8-4fa0-9350-49564b57413a is claimed 00:12:28.413 [2024-10-15 09:12:12.332967] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 6e4698d5-aeb4-4036-b9d2-60695479bbc7 00:12:28.413 [2024-10-15 09:12:12.333002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6e4698d5-aeb4-4036-b9d2-60695479bbc7 is claimed 00:12:28.413 [2024-10-15 09:12:12.333182] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 6e4698d5-aeb4-4036-b9d2-60695479bbc7 (2) smaller than existing raid bdev Raid (3) 00:12:28.413 [2024-10-15 09:12:12.333215] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 48293058-deb8-4fa0-9350-49564b57413a: File exists 00:12:28.413 [2024-10-15 09:12:12.333270] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:28.413 [2024-10-15 09:12:12.333303] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:28.413 [2024-10-15 09:12:12.333630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:28.414 [2024-10-15 09:12:12.333845] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:28.414 [2024-10-15 09:12:12.333873] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:12:28.414 
[2024-10-15 09:12:12.334063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.672 [2024-10-15 09:12:12.347502] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60332 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60332 ']' 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60332 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60332 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:28.672 killing process with pid 60332 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60332' 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60332 00:12:28.672 [2024-10-15 09:12:12.433344] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:28.672 09:12:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60332 00:12:28.672 [2024-10-15 09:12:12.433468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.672 [2024-10-15 09:12:12.433551] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.672 [2024-10-15 09:12:12.433567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:12:30.062 [2024-10-15 09:12:13.810916] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:31.440 09:12:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:12:31.440 00:12:31.440 real 0m4.817s 00:12:31.440 user 0m4.978s 00:12:31.440 sys 0m0.753s 00:12:31.440 09:12:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:31.440 09:12:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.440 ************************************ 00:12:31.440 END TEST raid1_resize_superblock_test 00:12:31.440 ************************************ 00:12:31.440 
09:12:15 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:12:31.440 09:12:15 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:12:31.440 09:12:15 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:12:31.440 09:12:15 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:12:31.440 09:12:15 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:12:31.440 09:12:15 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:12:31.440 09:12:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:31.440 09:12:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:31.440 09:12:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:31.440 ************************************ 00:12:31.440 START TEST raid_function_test_raid0 00:12:31.440 ************************************ 00:12:31.440 09:12:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:12:31.440 09:12:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:12:31.440 09:12:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:12:31.440 09:12:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:12:31.440 09:12:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60435 00:12:31.440 09:12:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:31.440 Process raid pid: 60435 00:12:31.440 09:12:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60435' 00:12:31.440 09:12:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60435 00:12:31.440 09:12:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 60435 ']' 00:12:31.440 09:12:15 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.440 09:12:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:31.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.440 09:12:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.440 09:12:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:31.440 09:12:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:31.440 [2024-10-15 09:12:15.143424] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:12:31.440 [2024-10-15 09:12:15.143646] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.440 [2024-10-15 09:12:15.325343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.699 [2024-10-15 09:12:15.493825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.959 [2024-10-15 09:12:15.726953] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.959 [2024-10-15 09:12:15.727032] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:32.528 Base_1 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:32.528 Base_2 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:32.528 [2024-10-15 09:12:16.265997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:32.528 [2024-10-15 09:12:16.268648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:32.528 [2024-10-15 09:12:16.268755] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:32.528 [2024-10-15 09:12:16.268777] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:32.528 [2024-10-15 09:12:16.269140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:32.528 [2024-10-15 09:12:16.269359] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:32.528 [2024-10-15 09:12:16.269386] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, 
raid_bdev 0x617000007780 00:12:32.528 [2024-10-15 09:12:16.269582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:32.528 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:32.528 
09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:12:32.787 [2024-10-15 09:12:16.654260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:32.787 /dev/nbd0 00:12:32.787 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:32.787 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:32.787 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:32.787 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:12:32.787 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:32.787 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:32.787 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:32.787 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:12:32.787 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:32.787 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:32.787 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.787 1+0 records in 00:12:32.787 1+0 records out 00:12:32.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407026 s, 10.1 MB/s 00:12:32.787 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.787 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:12:32.787 
09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.787 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:32.787 09:12:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:12:32.787 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:32.787 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:33.047 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:12:33.047 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:12:33.047 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:12:33.306 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:33.306 { 00:12:33.306 "nbd_device": "/dev/nbd0", 00:12:33.306 "bdev_name": "raid" 00:12:33.306 } 00:12:33.306 ]' 00:12:33.306 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:33.306 09:12:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:33.306 { 00:12:33.306 "nbd_device": "/dev/nbd0", 00:12:33.306 "bdev_name": "raid" 00:12:33.306 } 00:12:33.306 ]' 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@66 -- # echo 1 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:12:33.306 09:12:17 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:12:33.306 4096+0 records in 00:12:33.306 4096+0 records out 00:12:33.306 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0258324 s, 81.2 MB/s 00:12:33.306 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:12:33.564 4096+0 records in 00:12:33.564 4096+0 records out 00:12:33.564 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.322506 s, 6.5 MB/s 00:12:33.564 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:12:33.564 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:33.564 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:12:33.564 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:33.564 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:12:33.564 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:12:33.564 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:12:33.564 128+0 records in 00:12:33.564 128+0 records out 00:12:33.564 65536 bytes (66 kB, 64 KiB) copied, 0.000712299 s, 92.0 MB/s 00:12:33.564 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:12:33.564 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:33.564 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:33.564 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:33.564 
09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:33.564 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:12:33.564 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:12:33.564 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:12:33.564 2035+0 records in 00:12:33.564 2035+0 records out 00:12:33.564 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0125979 s, 82.7 MB/s 00:12:33.564 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:12:33.564 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:33.565 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:33.565 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:33.565 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:33.565 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:12:33.565 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:12:33.565 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:12:33.823 456+0 records in 00:12:33.823 456+0 records out 00:12:33.823 233472 bytes (233 kB, 228 KiB) copied, 0.00306748 s, 76.1 MB/s 00:12:33.823 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:12:33.823 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:33.823 09:12:17 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:33.823 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:33.823 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:33.823 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:12:33.823 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:33.823 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:33.823 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:33.823 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:33.823 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:12:33.823 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.823 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:34.082 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:34.082 [2024-10-15 09:12:17.820292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.082 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:34.082 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:34.082 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.082 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.082 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:34.082 09:12:17 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:12:34.082 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.082 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:12:34.082 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:12:34.082 09:12:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60435 00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 60435 ']' 00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 60435 
00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60435 00:12:34.340 killing process with pid 60435 00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60435' 00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 60435 00:12:34.340 [2024-10-15 09:12:18.226402] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:34.340 09:12:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 60435 00:12:34.340 [2024-10-15 09:12:18.226549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.340 [2024-10-15 09:12:18.226651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:34.340 [2024-10-15 09:12:18.226672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:12:34.598 [2024-10-15 09:12:18.423054] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:35.973 ************************************ 00:12:35.973 END TEST raid_function_test_raid0 00:12:35.973 ************************************ 00:12:35.973 09:12:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:12:35.973 00:12:35.973 real 0m4.508s 00:12:35.973 user 0m5.518s 00:12:35.973 sys 0m1.104s 00:12:35.973 09:12:19 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:12:35.973 09:12:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:35.973 09:12:19 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:12:35.973 09:12:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:35.973 09:12:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:35.973 09:12:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:35.973 ************************************ 00:12:35.973 START TEST raid_function_test_concat 00:12:35.973 ************************************ 00:12:35.973 09:12:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:12:35.973 09:12:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:12:35.973 09:12:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:12:35.973 09:12:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:12:35.973 Process raid pid: 60569 00:12:35.973 09:12:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60569 00:12:35.973 09:12:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:35.973 09:12:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60569' 00:12:35.973 09:12:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60569 00:12:35.973 09:12:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 60569 ']' 00:12:35.973 09:12:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.973 09:12:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:12:35.973 09:12:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.973 09:12:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:35.973 09:12:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:35.973 [2024-10-15 09:12:19.691081] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:12:35.973 [2024-10-15 09:12:19.691547] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.973 [2024-10-15 09:12:19.860031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.232 [2024-10-15 09:12:20.013950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.490 [2024-10-15 09:12:20.243859] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:36.490 [2024-10-15 09:12:20.243927] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:36.749 09:12:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:36.749 09:12:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:12:36.749 09:12:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:12:36.749 09:12:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.749 09:12:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:37.008 Base_1 00:12:37.008 09:12:20 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:37.008 Base_2 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:37.008 [2024-10-15 09:12:20.762213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:37.008 [2024-10-15 09:12:20.765009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:37.008 [2024-10-15 09:12:20.765145] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:37.008 [2024-10-15 09:12:20.765181] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:37.008 [2024-10-15 09:12:20.765558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:37.008 [2024-10-15 09:12:20.765934] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:37.008 [2024-10-15 09:12:20.765958] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:12:37.008 [2024-10-15 09:12:20.766224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.008 09:12:20 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:37.008 09:12:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:12:37.266 [2024-10-15 09:12:21.102381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:37.266 /dev/nbd0 00:12:37.266 09:12:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:37.266 09:12:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:37.266 09:12:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:37.266 09:12:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:12:37.266 09:12:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:37.266 09:12:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:37.266 09:12:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:37.266 09:12:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:12:37.266 09:12:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:37.266 09:12:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:37.266 09:12:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:37.266 1+0 records in 00:12:37.266 1+0 records out 00:12:37.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314049 s, 13.0 MB/s 00:12:37.266 09:12:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.266 09:12:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:12:37.266 09:12:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.266 09:12:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:37.266 09:12:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:12:37.266 09:12:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:37.266 09:12:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:37.266 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:12:37.266 09:12:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:12:37.266 09:12:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:12:37.524 09:12:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:37.524 { 00:12:37.524 "nbd_device": "/dev/nbd0", 00:12:37.524 "bdev_name": "raid" 00:12:37.524 } 00:12:37.524 ]' 00:12:37.524 09:12:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:37.524 { 00:12:37.524 "nbd_device": "/dev/nbd0", 00:12:37.524 "bdev_name": "raid" 00:12:37.524 } 00:12:37.524 ]' 00:12:37.524 09:12:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:37.782 09:12:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:12:37.782 09:12:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:12:37.782 09:12:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:37.782 09:12:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:12:37.782 09:12:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:12:37.782 09:12:21 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:12:37.782 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:12:37.782 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:12:37.782 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:12:37.782 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:12:37.782 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:12:37.782 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:12:37.783 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:12:37.783 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:12:37.783 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:12:37.783 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:12:37.783 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:12:37.783 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:12:37.783 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:12:37.783 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:12:37.783 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:12:37.783 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:12:37.783 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:12:37.783 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:12:37.783 4096+0 records in 00:12:37.783 4096+0 records out 00:12:37.783 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0422253 s, 49.7 MB/s 00:12:37.783 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:12:38.041 4096+0 records in 00:12:38.041 4096+0 records out 00:12:38.041 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.356725 s, 5.9 MB/s 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:12:38.041 128+0 records in 00:12:38.041 128+0 records out 00:12:38.041 65536 bytes (66 kB, 64 KiB) copied, 0.000630566 s, 104 MB/s 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:12:38.041 2035+0 records in 00:12:38.041 2035+0 records out 00:12:38.041 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0110844 s, 94.0 MB/s 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:12:38.041 456+0 records in 00:12:38.041 456+0 records out 00:12:38.041 233472 bytes (233 kB, 228 KiB) copied, 0.00304296 s, 76.7 MB/s 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:38.041 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:12:38.300 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:38.300 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:38.300 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:12:38.300 09:12:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:38.300 09:12:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:38.300 09:12:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:38.300 09:12:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:38.300 09:12:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:12:38.300 09:12:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.300 09:12:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:38.558 09:12:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:38.558 [2024-10-15 09:12:22.281604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.558 09:12:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:38.558 09:12:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:38.558 09:12:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.558 09:12:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.558 09:12:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:38.558 09:12:22 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:12:38.558 09:12:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.558 09:12:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:12:38.558 09:12:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:12:38.558 09:12:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60569 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 60569 ']' 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- 
# kill -0 60569 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60569 00:12:38.817 killing process with pid 60569 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60569' 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 60569 00:12:38.817 [2024-10-15 09:12:22.649215] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:38.817 09:12:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 60569 00:12:38.817 [2024-10-15 09:12:22.649357] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.817 [2024-10-15 09:12:22.649432] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.817 [2024-10-15 09:12:22.649458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:12:39.076 [2024-10-15 09:12:22.851315] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:40.452 09:12:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:12:40.452 00:12:40.452 real 0m4.391s 00:12:40.452 user 0m5.266s 00:12:40.452 sys 0m1.074s 00:12:40.452 09:12:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:40.452 09:12:23 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@10 -- # set +x 00:12:40.452 ************************************ 00:12:40.452 END TEST raid_function_test_concat 00:12:40.452 ************************************ 00:12:40.452 09:12:24 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:12:40.452 09:12:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:40.452 09:12:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:40.452 09:12:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:40.452 ************************************ 00:12:40.452 START TEST raid0_resize_test 00:12:40.452 ************************************ 00:12:40.452 09:12:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:12:40.452 09:12:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:12:40.452 09:12:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:12:40.452 09:12:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:12:40.452 09:12:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:12:40.452 09:12:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:12:40.452 09:12:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:12:40.452 09:12:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:12:40.452 09:12:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:12:40.452 09:12:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60698 00:12:40.452 Process raid pid: 60698 00:12:40.452 09:12:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60698' 00:12:40.452 09:12:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:12:40.452 09:12:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60698 00:12:40.452 09:12:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60698 ']' 00:12:40.452 09:12:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.452 09:12:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:40.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.452 09:12:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.452 09:12:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:40.452 09:12:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.452 [2024-10-15 09:12:24.140086] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:12:40.452 [2024-10-15 09:12:24.140270] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.452 [2024-10-15 09:12:24.310101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.711 [2024-10-15 09:12:24.459958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.970 [2024-10-15 09:12:24.689250] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.970 [2024-10-15 09:12:24.689326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.228 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:41.228 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:12:41.228 09:12:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:12:41.228 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.228 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.503 Base_1 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.503 Base_2 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.503 [2024-10-15 09:12:25.170983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:41.503 [2024-10-15 09:12:25.173742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:41.503 [2024-10-15 09:12:25.173816] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:41.503 [2024-10-15 09:12:25.173833] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:41.503 [2024-10-15 09:12:25.174176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:12:41.503 [2024-10-15 09:12:25.174336] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:41.503 [2024-10-15 09:12:25.174352] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:12:41.503 [2024-10-15 09:12:25.174519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.503 [2024-10-15 09:12:25.178971] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:41.503 [2024-10-15 09:12:25.179004] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:12:41.503 true 
00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.503 [2024-10-15 09:12:25.191226] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.503 [2024-10-15 09:12:25.247071] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:41.503 [2024-10-15 09:12:25.247275] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:12:41.503 [2024-10-15 09:12:25.247345] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:12:41.503 true 
00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.503 [2024-10-15 09:12:25.259237] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60698 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 60698 ']' 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 60698 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60698 00:12:41.503 killing process with pid 60698 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60698' 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 60698 00:12:41.503 [2024-10-15 09:12:25.343026] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:41.503 09:12:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 60698 00:12:41.503 [2024-10-15 09:12:25.343174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.503 [2024-10-15 09:12:25.343252] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.503 [2024-10-15 09:12:25.343267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:12:41.503 [2024-10-15 09:12:25.359691] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:42.881 ************************************ 00:12:42.881 END TEST raid0_resize_test 00:12:42.881 ************************************ 00:12:42.881 09:12:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:12:42.881 00:12:42.881 real 0m2.439s 00:12:42.881 user 0m2.674s 00:12:42.881 sys 0m0.428s 00:12:42.881 09:12:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:42.881 09:12:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.881 09:12:26 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:12:42.881 09:12:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:42.881 09:12:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:42.881 09:12:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:42.881 ************************************ 
00:12:42.881 START TEST raid1_resize_test 00:12:42.882 ************************************ 00:12:42.882 09:12:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:12:42.882 09:12:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:12:42.882 09:12:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:12:42.882 09:12:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:12:42.882 09:12:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:12:42.882 09:12:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:12:42.882 Process raid pid: 60760 00:12:42.882 09:12:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:12:42.882 09:12:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:12:42.882 09:12:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:12:42.882 09:12:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60760 00:12:42.882 09:12:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60760' 00:12:42.882 09:12:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60760 00:12:42.882 09:12:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:42.882 09:12:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60760 ']' 00:12:42.882 09:12:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.882 09:12:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:42.882 09:12:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:42.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.882 09:12:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:42.882 09:12:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.882 [2024-10-15 09:12:26.651207] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:12:42.882 [2024-10-15 09:12:26.651647] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.140 [2024-10-15 09:12:26.824340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.140 [2024-10-15 09:12:26.974364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.399 [2024-10-15 09:12:27.211891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.399 [2024-10-15 09:12:27.211963] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.996 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:43.996 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:12:43.996 09:12:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:12:43.996 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.996 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.996 Base_1 00:12:43.996 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.996 09:12:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:12:43.996 09:12:27 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.996 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.996 Base_2 00:12:43.996 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.996 09:12:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:12:43.996 09:12:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:12:43.996 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.996 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.996 [2024-10-15 09:12:27.659347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:43.997 [2024-10-15 09:12:27.663154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:43.997 [2024-10-15 09:12:27.663313] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:43.997 [2024-10-15 09:12:27.663368] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:43.997 [2024-10-15 09:12:27.663809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:12:43.997 [2024-10-15 09:12:27.664124] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:43.997 [2024-10-15 09:12:27.664150] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:12:43.997 [2024-10-15 09:12:27.664505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.997 [2024-10-15 09:12:27.671397] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:43.997 [2024-10-15 09:12:27.671436] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:12:43.997 true 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.997 [2024-10-15 09:12:27.683610] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.997 [2024-10-15 
09:12:27.735491] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:43.997 [2024-10-15 09:12:27.735532] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:12:43.997 [2024-10-15 09:12:27.735583] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:12:43.997 true 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.997 [2024-10-15 09:12:27.747842] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60760 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 60760 ']' 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 60760 00:12:43.997 09:12:27 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60760 00:12:43.997 killing process with pid 60760 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60760' 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 60760 00:12:43.997 [2024-10-15 09:12:27.824735] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:43.997 09:12:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 60760 00:12:43.997 [2024-10-15 09:12:27.824868] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:43.997 [2024-10-15 09:12:27.825567] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:43.997 [2024-10-15 09:12:27.825595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:12:43.997 [2024-10-15 09:12:27.841823] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:45.400 ************************************ 00:12:45.400 END TEST raid1_resize_test 00:12:45.400 ************************************ 00:12:45.400 09:12:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:12:45.400 00:12:45.400 real 0m2.435s 00:12:45.400 user 0m2.654s 00:12:45.400 sys 0m0.420s 00:12:45.400 09:12:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:45.400 09:12:28 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:45.400 09:12:29 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:12:45.400 09:12:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:45.400 09:12:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:12:45.400 09:12:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:45.400 09:12:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:45.400 09:12:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:45.400 ************************************ 00:12:45.400 START TEST raid_state_function_test 00:12:45.400 ************************************ 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:45.400 Process raid pid: 60822 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60822 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60822' 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60822 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- 
# '[' -z 60822 ']' 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:45.400 09:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.400 [2024-10-15 09:12:29.143597] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:12:45.400 [2024-10-15 09:12:29.143796] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.659 [2024-10-15 09:12:29.327319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.659 [2024-10-15 09:12:29.479366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.918 [2024-10-15 09:12:29.708398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.918 [2024-10-15 09:12:29.708467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.486 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:46.486 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:46.486 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:12:46.486 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.486 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.486 [2024-10-15 09:12:30.181641] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:46.486 [2024-10-15 09:12:30.181736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:46.486 [2024-10-15 09:12:30.181755] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:46.486 [2024-10-15 09:12:30.181773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:46.486 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.486 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:46.486 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.486 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.486 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:46.486 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.486 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:46.486 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.486 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.486 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.487 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:12:46.487 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.487 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.487 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.487 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.487 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.487 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.487 "name": "Existed_Raid", 00:12:46.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.487 "strip_size_kb": 64, 00:12:46.487 "state": "configuring", 00:12:46.487 "raid_level": "raid0", 00:12:46.487 "superblock": false, 00:12:46.487 "num_base_bdevs": 2, 00:12:46.487 "num_base_bdevs_discovered": 0, 00:12:46.487 "num_base_bdevs_operational": 2, 00:12:46.487 "base_bdevs_list": [ 00:12:46.487 { 00:12:46.487 "name": "BaseBdev1", 00:12:46.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.487 "is_configured": false, 00:12:46.487 "data_offset": 0, 00:12:46.487 "data_size": 0 00:12:46.487 }, 00:12:46.487 { 00:12:46.487 "name": "BaseBdev2", 00:12:46.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.487 "is_configured": false, 00:12:46.487 "data_offset": 0, 00:12:46.487 "data_size": 0 00:12:46.487 } 00:12:46.487 ] 00:12:46.487 }' 00:12:46.487 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.487 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.054 [2024-10-15 09:12:30.725686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:47.054 [2024-10-15 09:12:30.725904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.054 [2024-10-15 09:12:30.733683] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:47.054 [2024-10-15 09:12:30.733755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:47.054 [2024-10-15 09:12:30.733771] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:47.054 [2024-10-15 09:12:30.733790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.054 [2024-10-15 09:12:30.782545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:12:47.054 BaseBdev1 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.054 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.054 [ 00:12:47.054 { 00:12:47.054 "name": "BaseBdev1", 00:12:47.054 "aliases": [ 00:12:47.054 "7f1ae019-6d41-4f53-8091-e898f9117cb8" 00:12:47.054 ], 00:12:47.054 "product_name": "Malloc disk", 00:12:47.054 "block_size": 512, 00:12:47.054 "num_blocks": 65536, 00:12:47.054 "uuid": "7f1ae019-6d41-4f53-8091-e898f9117cb8", 00:12:47.054 "assigned_rate_limits": { 00:12:47.054 "rw_ios_per_sec": 0, 
00:12:47.054 "rw_mbytes_per_sec": 0, 00:12:47.054 "r_mbytes_per_sec": 0, 00:12:47.054 "w_mbytes_per_sec": 0 00:12:47.054 }, 00:12:47.054 "claimed": true, 00:12:47.054 "claim_type": "exclusive_write", 00:12:47.054 "zoned": false, 00:12:47.054 "supported_io_types": { 00:12:47.054 "read": true, 00:12:47.054 "write": true, 00:12:47.054 "unmap": true, 00:12:47.054 "flush": true, 00:12:47.054 "reset": true, 00:12:47.054 "nvme_admin": false, 00:12:47.054 "nvme_io": false, 00:12:47.054 "nvme_io_md": false, 00:12:47.054 "write_zeroes": true, 00:12:47.054 "zcopy": true, 00:12:47.054 "get_zone_info": false, 00:12:47.054 "zone_management": false, 00:12:47.054 "zone_append": false, 00:12:47.054 "compare": false, 00:12:47.055 "compare_and_write": false, 00:12:47.055 "abort": true, 00:12:47.055 "seek_hole": false, 00:12:47.055 "seek_data": false, 00:12:47.055 "copy": true, 00:12:47.055 "nvme_iov_md": false 00:12:47.055 }, 00:12:47.055 "memory_domains": [ 00:12:47.055 { 00:12:47.055 "dma_device_id": "system", 00:12:47.055 "dma_device_type": 1 00:12:47.055 }, 00:12:47.055 { 00:12:47.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.055 "dma_device_type": 2 00:12:47.055 } 00:12:47.055 ], 00:12:47.055 "driver_specific": {} 00:12:47.055 } 00:12:47.055 ] 00:12:47.055 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.055 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:47.055 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:47.055 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.055 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.055 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:47.055 09:12:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.055 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:47.055 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.055 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.055 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.055 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.055 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.055 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.055 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.055 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.055 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.055 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.055 "name": "Existed_Raid", 00:12:47.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.055 "strip_size_kb": 64, 00:12:47.055 "state": "configuring", 00:12:47.055 "raid_level": "raid0", 00:12:47.055 "superblock": false, 00:12:47.055 "num_base_bdevs": 2, 00:12:47.055 "num_base_bdevs_discovered": 1, 00:12:47.055 "num_base_bdevs_operational": 2, 00:12:47.055 "base_bdevs_list": [ 00:12:47.055 { 00:12:47.055 "name": "BaseBdev1", 00:12:47.055 "uuid": "7f1ae019-6d41-4f53-8091-e898f9117cb8", 00:12:47.055 "is_configured": true, 00:12:47.055 "data_offset": 0, 00:12:47.055 "data_size": 65536 00:12:47.055 }, 00:12:47.055 { 00:12:47.055 "name": "BaseBdev2", 00:12:47.055 
"uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.055 "is_configured": false, 00:12:47.055 "data_offset": 0, 00:12:47.055 "data_size": 0 00:12:47.055 } 00:12:47.055 ] 00:12:47.055 }' 00:12:47.055 09:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.055 09:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.622 [2024-10-15 09:12:31.334825] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:47.622 [2024-10-15 09:12:31.334897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.622 [2024-10-15 09:12:31.342894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:47.622 [2024-10-15 09:12:31.345745] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:47.622 [2024-10-15 09:12:31.345942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.622 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.623 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.623 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.623 09:12:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.623 "name": "Existed_Raid", 00:12:47.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.623 "strip_size_kb": 64, 00:12:47.623 "state": "configuring", 00:12:47.623 "raid_level": "raid0", 00:12:47.623 "superblock": false, 00:12:47.623 "num_base_bdevs": 2, 00:12:47.623 "num_base_bdevs_discovered": 1, 00:12:47.623 "num_base_bdevs_operational": 2, 00:12:47.623 "base_bdevs_list": [ 00:12:47.623 { 00:12:47.623 "name": "BaseBdev1", 00:12:47.623 "uuid": "7f1ae019-6d41-4f53-8091-e898f9117cb8", 00:12:47.623 "is_configured": true, 00:12:47.623 "data_offset": 0, 00:12:47.623 "data_size": 65536 00:12:47.623 }, 00:12:47.623 { 00:12:47.623 "name": "BaseBdev2", 00:12:47.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.623 "is_configured": false, 00:12:47.623 "data_offset": 0, 00:12:47.623 "data_size": 0 00:12:47.623 } 00:12:47.623 ] 00:12:47.623 }' 00:12:47.623 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.623 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.231 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:48.231 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.231 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.231 [2024-10-15 09:12:31.928582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:48.231 [2024-10-15 09:12:31.928656] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:48.231 [2024-10-15 09:12:31.928672] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:48.231 [2024-10-15 09:12:31.929024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:48.231 [2024-10-15 
09:12:31.929281] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:48.231 [2024-10-15 09:12:31.929306] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:48.231 [2024-10-15 09:12:31.929664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.232 BaseBdev2 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.232 [ 00:12:48.232 { 
00:12:48.232 "name": "BaseBdev2", 00:12:48.232 "aliases": [ 00:12:48.232 "0b2a53f1-4136-451f-8daa-016a0a43ddd2" 00:12:48.232 ], 00:12:48.232 "product_name": "Malloc disk", 00:12:48.232 "block_size": 512, 00:12:48.232 "num_blocks": 65536, 00:12:48.232 "uuid": "0b2a53f1-4136-451f-8daa-016a0a43ddd2", 00:12:48.232 "assigned_rate_limits": { 00:12:48.232 "rw_ios_per_sec": 0, 00:12:48.232 "rw_mbytes_per_sec": 0, 00:12:48.232 "r_mbytes_per_sec": 0, 00:12:48.232 "w_mbytes_per_sec": 0 00:12:48.232 }, 00:12:48.232 "claimed": true, 00:12:48.232 "claim_type": "exclusive_write", 00:12:48.232 "zoned": false, 00:12:48.232 "supported_io_types": { 00:12:48.232 "read": true, 00:12:48.232 "write": true, 00:12:48.232 "unmap": true, 00:12:48.232 "flush": true, 00:12:48.232 "reset": true, 00:12:48.232 "nvme_admin": false, 00:12:48.232 "nvme_io": false, 00:12:48.232 "nvme_io_md": false, 00:12:48.232 "write_zeroes": true, 00:12:48.232 "zcopy": true, 00:12:48.232 "get_zone_info": false, 00:12:48.232 "zone_management": false, 00:12:48.232 "zone_append": false, 00:12:48.232 "compare": false, 00:12:48.232 "compare_and_write": false, 00:12:48.232 "abort": true, 00:12:48.232 "seek_hole": false, 00:12:48.232 "seek_data": false, 00:12:48.232 "copy": true, 00:12:48.232 "nvme_iov_md": false 00:12:48.232 }, 00:12:48.232 "memory_domains": [ 00:12:48.232 { 00:12:48.232 "dma_device_id": "system", 00:12:48.232 "dma_device_type": 1 00:12:48.232 }, 00:12:48.232 { 00:12:48.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.232 "dma_device_type": 2 00:12:48.232 } 00:12:48.232 ], 00:12:48.232 "driver_specific": {} 00:12:48.232 } 00:12:48.232 ] 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.232 09:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.232 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.232 "name": "Existed_Raid", 00:12:48.232 "uuid": "29dd3091-8aa1-4100-8d2a-1bbe42992088", 00:12:48.232 
"strip_size_kb": 64, 00:12:48.232 "state": "online", 00:12:48.232 "raid_level": "raid0", 00:12:48.232 "superblock": false, 00:12:48.232 "num_base_bdevs": 2, 00:12:48.232 "num_base_bdevs_discovered": 2, 00:12:48.232 "num_base_bdevs_operational": 2, 00:12:48.232 "base_bdevs_list": [ 00:12:48.232 { 00:12:48.232 "name": "BaseBdev1", 00:12:48.232 "uuid": "7f1ae019-6d41-4f53-8091-e898f9117cb8", 00:12:48.232 "is_configured": true, 00:12:48.232 "data_offset": 0, 00:12:48.232 "data_size": 65536 00:12:48.232 }, 00:12:48.232 { 00:12:48.232 "name": "BaseBdev2", 00:12:48.232 "uuid": "0b2a53f1-4136-451f-8daa-016a0a43ddd2", 00:12:48.232 "is_configured": true, 00:12:48.232 "data_offset": 0, 00:12:48.232 "data_size": 65536 00:12:48.232 } 00:12:48.232 ] 00:12:48.232 }' 00:12:48.232 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.232 09:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.799 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:48.799 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:48.799 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:48.799 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:48.799 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:48.799 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:48.799 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:48.799 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:48.799 09:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.799 
09:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.799 [2024-10-15 09:12:32.505185] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:48.799 09:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.799 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:48.799 "name": "Existed_Raid", 00:12:48.799 "aliases": [ 00:12:48.799 "29dd3091-8aa1-4100-8d2a-1bbe42992088" 00:12:48.799 ], 00:12:48.799 "product_name": "Raid Volume", 00:12:48.799 "block_size": 512, 00:12:48.799 "num_blocks": 131072, 00:12:48.799 "uuid": "29dd3091-8aa1-4100-8d2a-1bbe42992088", 00:12:48.799 "assigned_rate_limits": { 00:12:48.799 "rw_ios_per_sec": 0, 00:12:48.799 "rw_mbytes_per_sec": 0, 00:12:48.799 "r_mbytes_per_sec": 0, 00:12:48.799 "w_mbytes_per_sec": 0 00:12:48.799 }, 00:12:48.799 "claimed": false, 00:12:48.799 "zoned": false, 00:12:48.799 "supported_io_types": { 00:12:48.799 "read": true, 00:12:48.799 "write": true, 00:12:48.799 "unmap": true, 00:12:48.799 "flush": true, 00:12:48.799 "reset": true, 00:12:48.799 "nvme_admin": false, 00:12:48.799 "nvme_io": false, 00:12:48.799 "nvme_io_md": false, 00:12:48.799 "write_zeroes": true, 00:12:48.799 "zcopy": false, 00:12:48.799 "get_zone_info": false, 00:12:48.799 "zone_management": false, 00:12:48.799 "zone_append": false, 00:12:48.799 "compare": false, 00:12:48.799 "compare_and_write": false, 00:12:48.799 "abort": false, 00:12:48.799 "seek_hole": false, 00:12:48.799 "seek_data": false, 00:12:48.799 "copy": false, 00:12:48.799 "nvme_iov_md": false 00:12:48.799 }, 00:12:48.799 "memory_domains": [ 00:12:48.799 { 00:12:48.799 "dma_device_id": "system", 00:12:48.799 "dma_device_type": 1 00:12:48.799 }, 00:12:48.799 { 00:12:48.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.799 "dma_device_type": 2 00:12:48.799 }, 00:12:48.799 { 00:12:48.799 "dma_device_id": "system", 
00:12:48.799 "dma_device_type": 1 00:12:48.799 }, 00:12:48.799 { 00:12:48.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.799 "dma_device_type": 2 00:12:48.799 } 00:12:48.800 ], 00:12:48.800 "driver_specific": { 00:12:48.800 "raid": { 00:12:48.800 "uuid": "29dd3091-8aa1-4100-8d2a-1bbe42992088", 00:12:48.800 "strip_size_kb": 64, 00:12:48.800 "state": "online", 00:12:48.800 "raid_level": "raid0", 00:12:48.800 "superblock": false, 00:12:48.800 "num_base_bdevs": 2, 00:12:48.800 "num_base_bdevs_discovered": 2, 00:12:48.800 "num_base_bdevs_operational": 2, 00:12:48.800 "base_bdevs_list": [ 00:12:48.800 { 00:12:48.800 "name": "BaseBdev1", 00:12:48.800 "uuid": "7f1ae019-6d41-4f53-8091-e898f9117cb8", 00:12:48.800 "is_configured": true, 00:12:48.800 "data_offset": 0, 00:12:48.800 "data_size": 65536 00:12:48.800 }, 00:12:48.800 { 00:12:48.800 "name": "BaseBdev2", 00:12:48.800 "uuid": "0b2a53f1-4136-451f-8daa-016a0a43ddd2", 00:12:48.800 "is_configured": true, 00:12:48.800 "data_offset": 0, 00:12:48.800 "data_size": 65536 00:12:48.800 } 00:12:48.800 ] 00:12:48.800 } 00:12:48.800 } 00:12:48.800 }' 00:12:48.800 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:48.800 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:48.800 BaseBdev2' 00:12:48.800 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.800 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:48.800 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.800 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:48.800 09:12:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.800 09:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.800 09:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.800 09:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.800 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.800 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.800 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.800 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:48.800 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.800 09:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.800 09:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.800 09:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.058 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.058 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.058 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:49.058 09:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.058 09:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.058 [2024-10-15 09:12:32.760963] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:49.058 [2024-10-15 09:12:32.761013] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:49.058 [2024-10-15 09:12:32.761095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:49.058 09:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.058 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:49.058 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:49.059 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:49.059 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:49.059 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:49.059 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:12:49.059 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.059 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:49.059 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:49.059 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.059 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:49.059 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.059 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.059 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:49.059 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.059 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.059 09:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.059 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.059 09:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.059 09:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.059 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.059 "name": "Existed_Raid", 00:12:49.059 "uuid": "29dd3091-8aa1-4100-8d2a-1bbe42992088", 00:12:49.059 "strip_size_kb": 64, 00:12:49.059 "state": "offline", 00:12:49.059 "raid_level": "raid0", 00:12:49.059 "superblock": false, 00:12:49.059 "num_base_bdevs": 2, 00:12:49.059 "num_base_bdevs_discovered": 1, 00:12:49.059 "num_base_bdevs_operational": 1, 00:12:49.059 "base_bdevs_list": [ 00:12:49.059 { 00:12:49.059 "name": null, 00:12:49.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.059 "is_configured": false, 00:12:49.059 "data_offset": 0, 00:12:49.059 "data_size": 65536 00:12:49.059 }, 00:12:49.059 { 00:12:49.059 "name": "BaseBdev2", 00:12:49.059 "uuid": "0b2a53f1-4136-451f-8daa-016a0a43ddd2", 00:12:49.059 "is_configured": true, 00:12:49.059 "data_offset": 0, 00:12:49.059 "data_size": 65536 00:12:49.059 } 00:12:49.059 ] 00:12:49.059 }' 00:12:49.059 09:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.059 09:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.626 [2024-10-15 09:12:33.385679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:49.626 [2024-10-15 09:12:33.385760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.626 
09:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60822 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 60822 ']' 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 60822 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:49.626 09:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60822 00:12:49.884 killing process with pid 60822 00:12:49.884 09:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:49.884 09:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:49.884 09:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60822' 00:12:49.884 09:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 60822 00:12:49.884 [2024-10-15 09:12:33.580405] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:49.884 09:12:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@974 -- # wait 60822 00:12:49.884 [2024-10-15 09:12:33.595727] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:50.820 ************************************ 00:12:50.820 END TEST raid_state_function_test 00:12:50.820 ************************************ 00:12:50.820 09:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:50.820 00:12:50.820 real 0m5.685s 00:12:50.820 user 0m8.524s 00:12:50.820 sys 0m0.829s 00:12:50.820 09:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.820 09:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.079 09:12:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:12:51.079 09:12:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:51.079 09:12:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:51.079 09:12:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:51.079 ************************************ 00:12:51.079 START TEST raid_state_function_test_sb 00:12:51.079 ************************************ 00:12:51.079 09:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:12:51.079 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:51.079 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:51.079 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:51.080 Process raid pid: 61081 00:12:51.080 09:12:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61081 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61081' 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61081 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 61081 ']' 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:51.080 09:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.080 [2024-10-15 09:12:34.880088] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:12:51.080 [2024-10-15 09:12:34.880568] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.339 [2024-10-15 09:12:35.059658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.597 [2024-10-15 09:12:35.269618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.597 [2024-10-15 09:12:35.495362] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.597 [2024-10-15 09:12:35.495700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.163 [2024-10-15 09:12:35.898966] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:52.163 [2024-10-15 09:12:35.899041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:52.163 [2024-10-15 09:12:35.899059] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:52.163 [2024-10-15 09:12:35.899076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.163 
09:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.163 "name": "Existed_Raid", 00:12:52.163 "uuid": "4b1c0c26-c42a-42cb-8f35-78cf57250c87", 00:12:52.163 "strip_size_kb": 
64, 00:12:52.163 "state": "configuring", 00:12:52.163 "raid_level": "raid0", 00:12:52.163 "superblock": true, 00:12:52.163 "num_base_bdevs": 2, 00:12:52.163 "num_base_bdevs_discovered": 0, 00:12:52.163 "num_base_bdevs_operational": 2, 00:12:52.163 "base_bdevs_list": [ 00:12:52.163 { 00:12:52.163 "name": "BaseBdev1", 00:12:52.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.163 "is_configured": false, 00:12:52.163 "data_offset": 0, 00:12:52.163 "data_size": 0 00:12:52.163 }, 00:12:52.163 { 00:12:52.163 "name": "BaseBdev2", 00:12:52.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.163 "is_configured": false, 00:12:52.163 "data_offset": 0, 00:12:52.163 "data_size": 0 00:12:52.163 } 00:12:52.163 ] 00:12:52.163 }' 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.163 09:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.730 09:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:52.730 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.730 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.730 [2024-10-15 09:12:36.402980] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:52.730 [2024-10-15 09:12:36.403192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:52.730 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.730 09:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:52.730 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.730 09:12:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.730 [2024-10-15 09:12:36.411017] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:52.730 [2024-10-15 09:12:36.411075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:52.731 [2024-10-15 09:12:36.411092] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:52.731 [2024-10-15 09:12:36.411112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.731 [2024-10-15 09:12:36.459477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:52.731 BaseBdev1 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.731 [ 00:12:52.731 { 00:12:52.731 "name": "BaseBdev1", 00:12:52.731 "aliases": [ 00:12:52.731 "9aea2951-dcec-4bf5-86e5-488e6ebb075d" 00:12:52.731 ], 00:12:52.731 "product_name": "Malloc disk", 00:12:52.731 "block_size": 512, 00:12:52.731 "num_blocks": 65536, 00:12:52.731 "uuid": "9aea2951-dcec-4bf5-86e5-488e6ebb075d", 00:12:52.731 "assigned_rate_limits": { 00:12:52.731 "rw_ios_per_sec": 0, 00:12:52.731 "rw_mbytes_per_sec": 0, 00:12:52.731 "r_mbytes_per_sec": 0, 00:12:52.731 "w_mbytes_per_sec": 0 00:12:52.731 }, 00:12:52.731 "claimed": true, 00:12:52.731 "claim_type": "exclusive_write", 00:12:52.731 "zoned": false, 00:12:52.731 "supported_io_types": { 00:12:52.731 "read": true, 00:12:52.731 "write": true, 00:12:52.731 "unmap": true, 00:12:52.731 "flush": true, 00:12:52.731 "reset": true, 00:12:52.731 "nvme_admin": false, 00:12:52.731 "nvme_io": false, 00:12:52.731 "nvme_io_md": false, 00:12:52.731 "write_zeroes": true, 00:12:52.731 "zcopy": true, 00:12:52.731 "get_zone_info": false, 00:12:52.731 "zone_management": false, 00:12:52.731 "zone_append": false, 00:12:52.731 "compare": false, 00:12:52.731 "compare_and_write": false, 00:12:52.731 
"abort": true, 00:12:52.731 "seek_hole": false, 00:12:52.731 "seek_data": false, 00:12:52.731 "copy": true, 00:12:52.731 "nvme_iov_md": false 00:12:52.731 }, 00:12:52.731 "memory_domains": [ 00:12:52.731 { 00:12:52.731 "dma_device_id": "system", 00:12:52.731 "dma_device_type": 1 00:12:52.731 }, 00:12:52.731 { 00:12:52.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.731 "dma_device_type": 2 00:12:52.731 } 00:12:52.731 ], 00:12:52.731 "driver_specific": {} 00:12:52.731 } 00:12:52.731 ] 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.731 "name": "Existed_Raid", 00:12:52.731 "uuid": "4fe1dd63-37f7-4d4d-92d9-88b36f1aa93e", 00:12:52.731 "strip_size_kb": 64, 00:12:52.731 "state": "configuring", 00:12:52.731 "raid_level": "raid0", 00:12:52.731 "superblock": true, 00:12:52.731 "num_base_bdevs": 2, 00:12:52.731 "num_base_bdevs_discovered": 1, 00:12:52.731 "num_base_bdevs_operational": 2, 00:12:52.731 "base_bdevs_list": [ 00:12:52.731 { 00:12:52.731 "name": "BaseBdev1", 00:12:52.731 "uuid": "9aea2951-dcec-4bf5-86e5-488e6ebb075d", 00:12:52.731 "is_configured": true, 00:12:52.731 "data_offset": 2048, 00:12:52.731 "data_size": 63488 00:12:52.731 }, 00:12:52.731 { 00:12:52.731 "name": "BaseBdev2", 00:12:52.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.731 "is_configured": false, 00:12:52.731 "data_offset": 0, 00:12:52.731 "data_size": 0 00:12:52.731 } 00:12:52.731 ] 00:12:52.731 }' 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.731 09:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:53.298 [2024-10-15 09:12:37.015707] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:53.298 [2024-10-15 09:12:37.015784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.298 [2024-10-15 09:12:37.023764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:53.298 [2024-10-15 09:12:37.026379] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:53.298 [2024-10-15 09:12:37.026439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.298 "name": "Existed_Raid", 00:12:53.298 "uuid": "de4339d2-e02c-46e9-9c8c-4e985c7636bd", 00:12:53.298 "strip_size_kb": 64, 00:12:53.298 "state": "configuring", 00:12:53.298 "raid_level": "raid0", 00:12:53.298 "superblock": true, 00:12:53.298 "num_base_bdevs": 2, 00:12:53.298 "num_base_bdevs_discovered": 1, 00:12:53.298 "num_base_bdevs_operational": 2, 00:12:53.298 "base_bdevs_list": [ 00:12:53.298 { 00:12:53.298 "name": "BaseBdev1", 00:12:53.298 "uuid": "9aea2951-dcec-4bf5-86e5-488e6ebb075d", 00:12:53.298 "is_configured": true, 00:12:53.298 "data_offset": 2048, 
00:12:53.298 "data_size": 63488 00:12:53.298 }, 00:12:53.298 { 00:12:53.298 "name": "BaseBdev2", 00:12:53.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.298 "is_configured": false, 00:12:53.298 "data_offset": 0, 00:12:53.298 "data_size": 0 00:12:53.298 } 00:12:53.298 ] 00:12:53.298 }' 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.298 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.866 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:53.866 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.866 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.866 [2024-10-15 09:12:37.582213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:53.866 [2024-10-15 09:12:37.582565] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:53.866 [2024-10-15 09:12:37.582586] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:53.866 [2024-10-15 09:12:37.582929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:53.866 BaseBdev2 00:12:53.866 [2024-10-15 09:12:37.583142] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:53.866 [2024-10-15 09:12:37.583165] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:53.866 [2024-10-15 09:12:37.583342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.866 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.866 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:12:53.866 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:53.866 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:53.866 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:53.866 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:53.866 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:53.866 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:53.866 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.866 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.866 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.866 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:53.866 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.866 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.866 [ 00:12:53.866 { 00:12:53.866 "name": "BaseBdev2", 00:12:53.866 "aliases": [ 00:12:53.866 "243463c8-2521-492c-9f96-2dba90d50765" 00:12:53.866 ], 00:12:53.866 "product_name": "Malloc disk", 00:12:53.866 "block_size": 512, 00:12:53.866 "num_blocks": 65536, 00:12:53.866 "uuid": "243463c8-2521-492c-9f96-2dba90d50765", 00:12:53.866 "assigned_rate_limits": { 00:12:53.866 "rw_ios_per_sec": 0, 00:12:53.866 "rw_mbytes_per_sec": 0, 00:12:53.866 "r_mbytes_per_sec": 0, 00:12:53.866 "w_mbytes_per_sec": 0 00:12:53.866 }, 00:12:53.866 "claimed": true, 00:12:53.866 "claim_type": 
"exclusive_write", 00:12:53.866 "zoned": false, 00:12:53.866 "supported_io_types": { 00:12:53.866 "read": true, 00:12:53.866 "write": true, 00:12:53.866 "unmap": true, 00:12:53.866 "flush": true, 00:12:53.866 "reset": true, 00:12:53.866 "nvme_admin": false, 00:12:53.866 "nvme_io": false, 00:12:53.866 "nvme_io_md": false, 00:12:53.866 "write_zeroes": true, 00:12:53.866 "zcopy": true, 00:12:53.866 "get_zone_info": false, 00:12:53.866 "zone_management": false, 00:12:53.866 "zone_append": false, 00:12:53.866 "compare": false, 00:12:53.866 "compare_and_write": false, 00:12:53.866 "abort": true, 00:12:53.866 "seek_hole": false, 00:12:53.866 "seek_data": false, 00:12:53.866 "copy": true, 00:12:53.866 "nvme_iov_md": false 00:12:53.866 }, 00:12:53.866 "memory_domains": [ 00:12:53.866 { 00:12:53.866 "dma_device_id": "system", 00:12:53.866 "dma_device_type": 1 00:12:53.866 }, 00:12:53.866 { 00:12:53.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.867 "dma_device_type": 2 00:12:53.867 } 00:12:53.867 ], 00:12:53.867 "driver_specific": {} 00:12:53.867 } 00:12:53.867 ] 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.867 "name": "Existed_Raid", 00:12:53.867 "uuid": "de4339d2-e02c-46e9-9c8c-4e985c7636bd", 00:12:53.867 "strip_size_kb": 64, 00:12:53.867 "state": "online", 00:12:53.867 "raid_level": "raid0", 00:12:53.867 "superblock": true, 00:12:53.867 "num_base_bdevs": 2, 00:12:53.867 "num_base_bdevs_discovered": 2, 00:12:53.867 "num_base_bdevs_operational": 2, 00:12:53.867 "base_bdevs_list": [ 00:12:53.867 { 00:12:53.867 "name": "BaseBdev1", 00:12:53.867 "uuid": "9aea2951-dcec-4bf5-86e5-488e6ebb075d", 00:12:53.867 "is_configured": true, 00:12:53.867 "data_offset": 2048, 00:12:53.867 "data_size": 63488 
00:12:53.867 }, 00:12:53.867 { 00:12:53.867 "name": "BaseBdev2", 00:12:53.867 "uuid": "243463c8-2521-492c-9f96-2dba90d50765", 00:12:53.867 "is_configured": true, 00:12:53.867 "data_offset": 2048, 00:12:53.867 "data_size": 63488 00:12:53.867 } 00:12:53.867 ] 00:12:53.867 }' 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.867 09:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.435 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:54.435 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:54.435 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:54.435 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:54.435 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:54.435 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:54.435 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:54.435 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:54.435 09:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.435 09:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.435 [2024-10-15 09:12:38.158804] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:54.435 09:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.435 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:54.435 "name": 
"Existed_Raid", 00:12:54.435 "aliases": [ 00:12:54.435 "de4339d2-e02c-46e9-9c8c-4e985c7636bd" 00:12:54.435 ], 00:12:54.435 "product_name": "Raid Volume", 00:12:54.435 "block_size": 512, 00:12:54.435 "num_blocks": 126976, 00:12:54.435 "uuid": "de4339d2-e02c-46e9-9c8c-4e985c7636bd", 00:12:54.435 "assigned_rate_limits": { 00:12:54.435 "rw_ios_per_sec": 0, 00:12:54.435 "rw_mbytes_per_sec": 0, 00:12:54.435 "r_mbytes_per_sec": 0, 00:12:54.435 "w_mbytes_per_sec": 0 00:12:54.435 }, 00:12:54.435 "claimed": false, 00:12:54.435 "zoned": false, 00:12:54.435 "supported_io_types": { 00:12:54.435 "read": true, 00:12:54.435 "write": true, 00:12:54.435 "unmap": true, 00:12:54.435 "flush": true, 00:12:54.435 "reset": true, 00:12:54.435 "nvme_admin": false, 00:12:54.435 "nvme_io": false, 00:12:54.435 "nvme_io_md": false, 00:12:54.435 "write_zeroes": true, 00:12:54.435 "zcopy": false, 00:12:54.435 "get_zone_info": false, 00:12:54.435 "zone_management": false, 00:12:54.435 "zone_append": false, 00:12:54.435 "compare": false, 00:12:54.435 "compare_and_write": false, 00:12:54.435 "abort": false, 00:12:54.435 "seek_hole": false, 00:12:54.435 "seek_data": false, 00:12:54.435 "copy": false, 00:12:54.435 "nvme_iov_md": false 00:12:54.435 }, 00:12:54.435 "memory_domains": [ 00:12:54.435 { 00:12:54.435 "dma_device_id": "system", 00:12:54.435 "dma_device_type": 1 00:12:54.435 }, 00:12:54.435 { 00:12:54.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.435 "dma_device_type": 2 00:12:54.435 }, 00:12:54.435 { 00:12:54.435 "dma_device_id": "system", 00:12:54.435 "dma_device_type": 1 00:12:54.435 }, 00:12:54.435 { 00:12:54.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.435 "dma_device_type": 2 00:12:54.435 } 00:12:54.435 ], 00:12:54.435 "driver_specific": { 00:12:54.435 "raid": { 00:12:54.435 "uuid": "de4339d2-e02c-46e9-9c8c-4e985c7636bd", 00:12:54.435 "strip_size_kb": 64, 00:12:54.435 "state": "online", 00:12:54.435 "raid_level": "raid0", 00:12:54.435 "superblock": true, 00:12:54.435 
"num_base_bdevs": 2, 00:12:54.435 "num_base_bdevs_discovered": 2, 00:12:54.435 "num_base_bdevs_operational": 2, 00:12:54.435 "base_bdevs_list": [ 00:12:54.435 { 00:12:54.435 "name": "BaseBdev1", 00:12:54.435 "uuid": "9aea2951-dcec-4bf5-86e5-488e6ebb075d", 00:12:54.435 "is_configured": true, 00:12:54.435 "data_offset": 2048, 00:12:54.435 "data_size": 63488 00:12:54.435 }, 00:12:54.435 { 00:12:54.435 "name": "BaseBdev2", 00:12:54.435 "uuid": "243463c8-2521-492c-9f96-2dba90d50765", 00:12:54.435 "is_configured": true, 00:12:54.435 "data_offset": 2048, 00:12:54.435 "data_size": 63488 00:12:54.435 } 00:12:54.436 ] 00:12:54.436 } 00:12:54.436 } 00:12:54.436 }' 00:12:54.436 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:54.436 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:54.436 BaseBdev2' 00:12:54.436 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.436 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:54.436 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.436 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.436 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:54.436 09:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.436 09:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.436 09:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.704 [2024-10-15 09:12:38.418686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:54.704 [2024-10-15 09:12:38.418735] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:54.704 [2024-10-15 09:12:38.418814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.704 09:12:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.704 "name": "Existed_Raid", 00:12:54.704 "uuid": "de4339d2-e02c-46e9-9c8c-4e985c7636bd", 00:12:54.704 "strip_size_kb": 64, 00:12:54.704 "state": "offline", 00:12:54.704 "raid_level": "raid0", 00:12:54.704 "superblock": true, 00:12:54.704 "num_base_bdevs": 2, 00:12:54.704 "num_base_bdevs_discovered": 1, 00:12:54.704 "num_base_bdevs_operational": 1, 00:12:54.704 "base_bdevs_list": [ 00:12:54.704 { 00:12:54.704 "name": null, 00:12:54.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.704 "is_configured": false, 00:12:54.704 "data_offset": 0, 00:12:54.704 "data_size": 63488 00:12:54.704 }, 00:12:54.704 { 00:12:54.704 "name": "BaseBdev2", 00:12:54.704 "uuid": "243463c8-2521-492c-9f96-2dba90d50765", 00:12:54.704 "is_configured": true, 00:12:54.704 "data_offset": 2048, 00:12:54.704 "data_size": 63488 00:12:54.704 } 00:12:54.704 ] 00:12:54.704 }' 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.704 09:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.272 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:55.272 09:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.272 09:12:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.272 [2024-10-15 09:12:39.047812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:55.272 [2024-10-15 09:12:39.047906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61081 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 61081 ']' 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 61081 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:55.272 09:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61081 00:12:55.530 killing process with pid 61081 00:12:55.530 09:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:55.530 09:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:55.530 09:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61081' 00:12:55.530 09:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 61081 00:12:55.530 [2024-10-15 09:12:39.223994] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:55.530 09:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 61081 00:12:55.530 [2024-10-15 09:12:39.239349] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:56.464 09:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:12:56.464 00:12:56.464 real 0m5.588s 00:12:56.464 user 0m8.307s 00:12:56.464 sys 0m0.849s 00:12:56.464 09:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:56.464 09:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.464 ************************************ 00:12:56.464 END TEST raid_state_function_test_sb 00:12:56.464 ************************************ 00:12:56.464 09:12:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:12:56.723 09:12:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:56.723 09:12:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:56.723 09:12:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:56.723 ************************************ 00:12:56.723 START TEST raid_superblock_test 00:12:56.723 ************************************ 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61333 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61333 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 61333 ']' 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:56.723 09:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.723 [2024-10-15 09:12:40.514440] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:12:56.723 [2024-10-15 09:12:40.514913] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61333 ] 00:12:56.981 [2024-10-15 09:12:40.683480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.981 [2024-10-15 09:12:40.850045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.241 [2024-10-15 09:12:41.093602] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:57.241 [2024-10-15 09:12:41.093898] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:57.809 09:12:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.809 malloc1 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.809 [2024-10-15 09:12:41.533413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:57.809 [2024-10-15 09:12:41.533652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.809 [2024-10-15 09:12:41.533735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:57.809 [2024-10-15 09:12:41.533908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.809 [2024-10-15 09:12:41.536935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.809 [2024-10-15 09:12:41.537098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:57.809 pt1 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:57.809 09:12:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.809 malloc2 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.809 [2024-10-15 09:12:41.593269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:57.809 [2024-10-15 09:12:41.593347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.809 [2024-10-15 09:12:41.593382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:57.809 
[2024-10-15 09:12:41.593397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.809 [2024-10-15 09:12:41.596430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.809 [2024-10-15 09:12:41.596475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:57.809 pt2 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.809 [2024-10-15 09:12:41.601394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:57.809 [2024-10-15 09:12:41.603979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:57.809 [2024-10-15 09:12:41.604202] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:57.809 [2024-10-15 09:12:41.604222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:57.809 [2024-10-15 09:12:41.604527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:57.809 [2024-10-15 09:12:41.604749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:57.809 [2024-10-15 09:12:41.604772] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:57.809 [2024-10-15 09:12:41.604949] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.809 "name": "raid_bdev1", 00:12:57.809 "uuid": 
"24941265-70cd-4379-9fdd-46c0ede34e3f", 00:12:57.809 "strip_size_kb": 64, 00:12:57.809 "state": "online", 00:12:57.809 "raid_level": "raid0", 00:12:57.809 "superblock": true, 00:12:57.809 "num_base_bdevs": 2, 00:12:57.809 "num_base_bdevs_discovered": 2, 00:12:57.809 "num_base_bdevs_operational": 2, 00:12:57.809 "base_bdevs_list": [ 00:12:57.809 { 00:12:57.809 "name": "pt1", 00:12:57.809 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.809 "is_configured": true, 00:12:57.809 "data_offset": 2048, 00:12:57.809 "data_size": 63488 00:12:57.809 }, 00:12:57.809 { 00:12:57.809 "name": "pt2", 00:12:57.809 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.809 "is_configured": true, 00:12:57.809 "data_offset": 2048, 00:12:57.809 "data_size": 63488 00:12:57.809 } 00:12:57.809 ] 00:12:57.809 }' 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.809 09:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.376 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:58.377 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:58.377 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:58.377 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:58.377 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:58.377 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:58.377 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:58.377 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.377 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.377 
09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:58.377 [2024-10-15 09:12:42.109905] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.377 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.377 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:58.377 "name": "raid_bdev1", 00:12:58.377 "aliases": [ 00:12:58.377 "24941265-70cd-4379-9fdd-46c0ede34e3f" 00:12:58.377 ], 00:12:58.377 "product_name": "Raid Volume", 00:12:58.377 "block_size": 512, 00:12:58.377 "num_blocks": 126976, 00:12:58.377 "uuid": "24941265-70cd-4379-9fdd-46c0ede34e3f", 00:12:58.377 "assigned_rate_limits": { 00:12:58.377 "rw_ios_per_sec": 0, 00:12:58.377 "rw_mbytes_per_sec": 0, 00:12:58.377 "r_mbytes_per_sec": 0, 00:12:58.377 "w_mbytes_per_sec": 0 00:12:58.377 }, 00:12:58.377 "claimed": false, 00:12:58.377 "zoned": false, 00:12:58.377 "supported_io_types": { 00:12:58.377 "read": true, 00:12:58.377 "write": true, 00:12:58.377 "unmap": true, 00:12:58.377 "flush": true, 00:12:58.377 "reset": true, 00:12:58.377 "nvme_admin": false, 00:12:58.377 "nvme_io": false, 00:12:58.377 "nvme_io_md": false, 00:12:58.377 "write_zeroes": true, 00:12:58.377 "zcopy": false, 00:12:58.377 "get_zone_info": false, 00:12:58.377 "zone_management": false, 00:12:58.377 "zone_append": false, 00:12:58.377 "compare": false, 00:12:58.377 "compare_and_write": false, 00:12:58.377 "abort": false, 00:12:58.377 "seek_hole": false, 00:12:58.377 "seek_data": false, 00:12:58.377 "copy": false, 00:12:58.377 "nvme_iov_md": false 00:12:58.377 }, 00:12:58.377 "memory_domains": [ 00:12:58.377 { 00:12:58.377 "dma_device_id": "system", 00:12:58.377 "dma_device_type": 1 00:12:58.377 }, 00:12:58.377 { 00:12:58.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.377 "dma_device_type": 2 00:12:58.377 }, 00:12:58.377 { 00:12:58.377 "dma_device_id": "system", 00:12:58.377 
"dma_device_type": 1 00:12:58.377 }, 00:12:58.377 { 00:12:58.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.377 "dma_device_type": 2 00:12:58.377 } 00:12:58.377 ], 00:12:58.377 "driver_specific": { 00:12:58.377 "raid": { 00:12:58.377 "uuid": "24941265-70cd-4379-9fdd-46c0ede34e3f", 00:12:58.377 "strip_size_kb": 64, 00:12:58.377 "state": "online", 00:12:58.377 "raid_level": "raid0", 00:12:58.377 "superblock": true, 00:12:58.377 "num_base_bdevs": 2, 00:12:58.377 "num_base_bdevs_discovered": 2, 00:12:58.377 "num_base_bdevs_operational": 2, 00:12:58.377 "base_bdevs_list": [ 00:12:58.377 { 00:12:58.377 "name": "pt1", 00:12:58.377 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:58.377 "is_configured": true, 00:12:58.377 "data_offset": 2048, 00:12:58.377 "data_size": 63488 00:12:58.377 }, 00:12:58.377 { 00:12:58.377 "name": "pt2", 00:12:58.377 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.377 "is_configured": true, 00:12:58.377 "data_offset": 2048, 00:12:58.377 "data_size": 63488 00:12:58.377 } 00:12:58.377 ] 00:12:58.377 } 00:12:58.377 } 00:12:58.377 }' 00:12:58.377 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:58.377 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:58.377 pt2' 00:12:58.377 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.377 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:58.377 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.377 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:58.377 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:12:58.377 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.377 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.377 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:58.636 [2024-10-15 09:12:42.369935] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=24941265-70cd-4379-9fdd-46c0ede34e3f 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 24941265-70cd-4379-9fdd-46c0ede34e3f ']' 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.636 [2024-10-15 09:12:42.421641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:58.636 [2024-10-15 09:12:42.421680] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:58.636 [2024-10-15 09:12:42.421812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.636 [2024-10-15 09:12:42.421883] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.636 [2024-10-15 09:12:42.421904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.636 
09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:58.636 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.895 [2024-10-15 09:12:42.569689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:58.895 [2024-10-15 09:12:42.572475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:58.895 [2024-10-15 09:12:42.572573] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:58.895 [2024-10-15 09:12:42.572664] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:58.895 [2024-10-15 09:12:42.572693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:58.895 [2024-10-15 09:12:42.572710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:58.895 request: 00:12:58.895 { 00:12:58.895 "name": "raid_bdev1", 00:12:58.895 "raid_level": "raid0", 00:12:58.895 "base_bdevs": [ 00:12:58.895 "malloc1", 00:12:58.895 "malloc2" 00:12:58.895 ], 00:12:58.895 "strip_size_kb": 64, 00:12:58.895 "superblock": false, 00:12:58.895 "method": "bdev_raid_create", 00:12:58.895 "req_id": 1 00:12:58.895 } 00:12:58.895 Got JSON-RPC error response 00:12:58.895 response: 00:12:58.895 { 00:12:58.895 "code": -17, 00:12:58.895 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:58.895 } 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.895 [2024-10-15 09:12:42.629602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:58.895 [2024-10-15 09:12:42.629810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.895 [2024-10-15 09:12:42.629949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:58.895 [2024-10-15 09:12:42.630134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.895 [2024-10-15 09:12:42.633327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.895 [2024-10-15 09:12:42.633376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:58.895 [2024-10-15 09:12:42.633500] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:58.895 [2024-10-15 09:12:42.633587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:58.895 pt1 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.895 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.896 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.896 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.896 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.896 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.896 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.896 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.896 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.896 09:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.896 "name": "raid_bdev1", 00:12:58.896 "uuid": "24941265-70cd-4379-9fdd-46c0ede34e3f", 00:12:58.896 "strip_size_kb": 64, 00:12:58.896 "state": "configuring", 00:12:58.896 "raid_level": "raid0", 00:12:58.896 "superblock": true, 00:12:58.896 "num_base_bdevs": 2, 00:12:58.896 "num_base_bdevs_discovered": 1, 00:12:58.896 "num_base_bdevs_operational": 2, 00:12:58.896 "base_bdevs_list": [ 00:12:58.896 { 00:12:58.896 "name": "pt1", 00:12:58.896 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:58.896 "is_configured": true, 00:12:58.896 "data_offset": 2048, 00:12:58.896 "data_size": 63488 00:12:58.896 }, 00:12:58.896 { 00:12:58.896 "name": null, 00:12:58.896 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.896 "is_configured": false, 00:12:58.896 "data_offset": 2048, 00:12:58.896 "data_size": 63488 00:12:58.896 } 00:12:58.896 ] 00:12:58.896 }' 00:12:58.896 09:12:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.896 09:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.463 [2024-10-15 09:12:43.142045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:59.463 [2024-10-15 09:12:43.142314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.463 [2024-10-15 09:12:43.142364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:59.463 [2024-10-15 09:12:43.142386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.463 [2024-10-15 09:12:43.143072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.463 [2024-10-15 09:12:43.143131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:59.463 [2024-10-15 09:12:43.143252] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:59.463 [2024-10-15 09:12:43.143292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:59.463 [2024-10-15 09:12:43.143465] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:59.463 [2024-10-15 09:12:43.143487] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:59.463 [2024-10-15 09:12:43.143789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:59.463 [2024-10-15 09:12:43.143993] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:59.463 [2024-10-15 09:12:43.144009] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:59.463 [2024-10-15 09:12:43.144201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.463 pt2 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.463 "name": "raid_bdev1", 00:12:59.463 "uuid": "24941265-70cd-4379-9fdd-46c0ede34e3f", 00:12:59.463 "strip_size_kb": 64, 00:12:59.463 "state": "online", 00:12:59.463 "raid_level": "raid0", 00:12:59.463 "superblock": true, 00:12:59.463 "num_base_bdevs": 2, 00:12:59.463 "num_base_bdevs_discovered": 2, 00:12:59.463 "num_base_bdevs_operational": 2, 00:12:59.463 "base_bdevs_list": [ 00:12:59.463 { 00:12:59.463 "name": "pt1", 00:12:59.463 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:59.463 "is_configured": true, 00:12:59.463 "data_offset": 2048, 00:12:59.463 "data_size": 63488 00:12:59.463 }, 00:12:59.463 { 00:12:59.463 "name": "pt2", 00:12:59.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:59.463 "is_configured": true, 00:12:59.463 "data_offset": 2048, 00:12:59.463 "data_size": 63488 00:12:59.463 } 00:12:59.463 ] 00:12:59.463 }' 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.463 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.031 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:00.031 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:00.031 
09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:00.031 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:00.031 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:00.031 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:00.031 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:00.031 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.031 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:00.031 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.031 [2024-10-15 09:12:43.658532] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:00.031 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.031 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:00.031 "name": "raid_bdev1", 00:13:00.031 "aliases": [ 00:13:00.031 "24941265-70cd-4379-9fdd-46c0ede34e3f" 00:13:00.031 ], 00:13:00.031 "product_name": "Raid Volume", 00:13:00.031 "block_size": 512, 00:13:00.031 "num_blocks": 126976, 00:13:00.031 "uuid": "24941265-70cd-4379-9fdd-46c0ede34e3f", 00:13:00.031 "assigned_rate_limits": { 00:13:00.031 "rw_ios_per_sec": 0, 00:13:00.031 "rw_mbytes_per_sec": 0, 00:13:00.031 "r_mbytes_per_sec": 0, 00:13:00.031 "w_mbytes_per_sec": 0 00:13:00.031 }, 00:13:00.031 "claimed": false, 00:13:00.031 "zoned": false, 00:13:00.031 "supported_io_types": { 00:13:00.031 "read": true, 00:13:00.031 "write": true, 00:13:00.031 "unmap": true, 00:13:00.031 "flush": true, 00:13:00.031 "reset": true, 00:13:00.031 "nvme_admin": false, 00:13:00.031 "nvme_io": false, 00:13:00.031 "nvme_io_md": false, 00:13:00.031 
"write_zeroes": true, 00:13:00.031 "zcopy": false, 00:13:00.031 "get_zone_info": false, 00:13:00.031 "zone_management": false, 00:13:00.031 "zone_append": false, 00:13:00.031 "compare": false, 00:13:00.031 "compare_and_write": false, 00:13:00.031 "abort": false, 00:13:00.031 "seek_hole": false, 00:13:00.031 "seek_data": false, 00:13:00.031 "copy": false, 00:13:00.031 "nvme_iov_md": false 00:13:00.031 }, 00:13:00.031 "memory_domains": [ 00:13:00.031 { 00:13:00.031 "dma_device_id": "system", 00:13:00.031 "dma_device_type": 1 00:13:00.031 }, 00:13:00.031 { 00:13:00.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.031 "dma_device_type": 2 00:13:00.031 }, 00:13:00.031 { 00:13:00.031 "dma_device_id": "system", 00:13:00.031 "dma_device_type": 1 00:13:00.031 }, 00:13:00.031 { 00:13:00.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.031 "dma_device_type": 2 00:13:00.031 } 00:13:00.031 ], 00:13:00.031 "driver_specific": { 00:13:00.031 "raid": { 00:13:00.031 "uuid": "24941265-70cd-4379-9fdd-46c0ede34e3f", 00:13:00.031 "strip_size_kb": 64, 00:13:00.031 "state": "online", 00:13:00.031 "raid_level": "raid0", 00:13:00.031 "superblock": true, 00:13:00.031 "num_base_bdevs": 2, 00:13:00.031 "num_base_bdevs_discovered": 2, 00:13:00.031 "num_base_bdevs_operational": 2, 00:13:00.031 "base_bdevs_list": [ 00:13:00.031 { 00:13:00.031 "name": "pt1", 00:13:00.031 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:00.031 "is_configured": true, 00:13:00.031 "data_offset": 2048, 00:13:00.031 "data_size": 63488 00:13:00.031 }, 00:13:00.031 { 00:13:00.031 "name": "pt2", 00:13:00.031 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:00.031 "is_configured": true, 00:13:00.031 "data_offset": 2048, 00:13:00.031 "data_size": 63488 00:13:00.031 } 00:13:00.031 ] 00:13:00.031 } 00:13:00.031 } 00:13:00.031 }' 00:13:00.031 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:13:00.031 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:00.031 pt2' 00:13:00.031 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:00.031 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:00.031 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:00.032 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:00.032 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.032 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:00.032 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.032 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.032 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:00.032 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:00.032 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:00.032 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:00.032 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.032 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.032 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:00.032 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.032 09:12:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:00.032 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:00.032 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:00.032 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:00.032 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.032 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.032 [2024-10-15 09:12:43.926636] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:00.032 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.291 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 24941265-70cd-4379-9fdd-46c0ede34e3f '!=' 24941265-70cd-4379-9fdd-46c0ede34e3f ']' 00:13:00.291 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:13:00.291 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:00.291 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:00.291 09:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61333 00:13:00.291 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 61333 ']' 00:13:00.291 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 61333 00:13:00.291 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:00.291 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:00.291 09:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61333 00:13:00.291 killing process with pid 61333 
00:13:00.291 09:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:00.291 09:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:00.291 09:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61333' 00:13:00.291 09:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 61333 00:13:00.291 [2024-10-15 09:12:44.006072] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:00.291 [2024-10-15 09:12:44.006252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:00.291 09:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 61333 00:13:00.291 [2024-10-15 09:12:44.006329] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:00.291 [2024-10-15 09:12:44.006349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:00.291 [2024-10-15 09:12:44.212341] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:01.668 09:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:01.668 00:13:01.668 real 0m4.928s 00:13:01.668 user 0m7.121s 00:13:01.668 sys 0m0.771s 00:13:01.668 09:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:01.668 09:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.668 ************************************ 00:13:01.668 END TEST raid_superblock_test 00:13:01.668 ************************************ 00:13:01.668 09:12:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:13:01.668 09:12:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:01.668 09:12:45 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:13:01.668 09:12:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:01.668 ************************************ 00:13:01.668 START TEST raid_read_error_test 00:13:01.668 ************************************ 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:01.668 09:12:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:01.668 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ecC8oRJiuh 00:13:01.669 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61550 00:13:01.669 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61550 00:13:01.669 09:12:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:01.669 09:12:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 61550 ']' 00:13:01.669 09:12:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.669 09:12:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:01.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.669 09:12:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:01.669 09:12:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:01.669 09:12:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.669 [2024-10-15 09:12:45.513003] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:13:01.669 [2024-10-15 09:12:45.513234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61550 ] 00:13:01.928 [2024-10-15 09:12:45.694791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.928 [2024-10-15 09:12:45.848893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.186 [2024-10-15 09:12:46.075737] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:02.186 [2024-10-15 09:12:46.075814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:02.754 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:02.754 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:02.754 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:02.754 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:02.754 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.754 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.754 BaseBdev1_malloc 00:13:02.754 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.754 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:13:02.754 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.754 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.754 true 00:13:02.754 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.754 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:02.754 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.754 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.754 [2024-10-15 09:12:46.561890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:02.754 [2024-10-15 09:12:46.562043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.754 [2024-10-15 09:12:46.562099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:02.754 [2024-10-15 09:12:46.562170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.754 [2024-10-15 09:12:46.566252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.754 [2024-10-15 09:12:46.566318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:02.754 BaseBdev1 00:13:02.754 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.754 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:02.754 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:02.754 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.754 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:02.754 BaseBdev2_malloc 00:13:02.754 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.754 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.755 true 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.755 [2024-10-15 09:12:46.639174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:02.755 [2024-10-15 09:12:46.639247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.755 [2024-10-15 09:12:46.639274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:02.755 [2024-10-15 09:12:46.639300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.755 [2024-10-15 09:12:46.642357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.755 [2024-10-15 09:12:46.642527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:02.755 BaseBdev2 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:02.755 09:12:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.755 [2024-10-15 09:12:46.647382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:02.755 [2024-10-15 09:12:46.650019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:02.755 [2024-10-15 09:12:46.650313] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:02.755 [2024-10-15 09:12:46.650339] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:02.755 [2024-10-15 09:12:46.650655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:02.755 [2024-10-15 09:12:46.650888] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:02.755 [2024-10-15 09:12:46.650903] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:02.755 [2024-10-15 09:12:46.651087] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.755 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.014 09:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.014 "name": "raid_bdev1", 00:13:03.014 "uuid": "6a6bfecc-d43d-417f-8092-fa42087da9cb", 00:13:03.014 "strip_size_kb": 64, 00:13:03.014 "state": "online", 00:13:03.014 "raid_level": "raid0", 00:13:03.014 "superblock": true, 00:13:03.014 "num_base_bdevs": 2, 00:13:03.014 "num_base_bdevs_discovered": 2, 00:13:03.014 "num_base_bdevs_operational": 2, 00:13:03.014 "base_bdevs_list": [ 00:13:03.014 { 00:13:03.014 "name": "BaseBdev1", 00:13:03.014 "uuid": "0f638682-74ad-57be-91a2-4c25bc54aba2", 00:13:03.014 "is_configured": true, 00:13:03.014 "data_offset": 2048, 00:13:03.014 "data_size": 63488 00:13:03.014 }, 00:13:03.014 { 00:13:03.014 "name": "BaseBdev2", 00:13:03.014 "uuid": "f737e4a5-6fe5-59cc-aff5-c26d2036da3d", 00:13:03.014 "is_configured": true, 00:13:03.014 "data_offset": 2048, 00:13:03.014 "data_size": 63488 00:13:03.014 } 00:13:03.014 ] 00:13:03.014 }' 00:13:03.014 09:12:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.014 09:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.273 09:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:03.273 09:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:03.532 [2024-10-15 09:12:47.289109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:04.517 09:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:04.517 09:12:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.517 09:12:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.517 09:12:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.517 09:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:04.517 09:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:04.517 09:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:13:04.517 09:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:04.517 09:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.517 09:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.517 09:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:04.517 09:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.517 09:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:13:04.517 09:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.517 09:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.517 09:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.517 09:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.517 09:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.517 09:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.517 09:12:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.518 09:12:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.518 09:12:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.518 09:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.518 "name": "raid_bdev1", 00:13:04.518 "uuid": "6a6bfecc-d43d-417f-8092-fa42087da9cb", 00:13:04.518 "strip_size_kb": 64, 00:13:04.518 "state": "online", 00:13:04.518 "raid_level": "raid0", 00:13:04.518 "superblock": true, 00:13:04.518 "num_base_bdevs": 2, 00:13:04.518 "num_base_bdevs_discovered": 2, 00:13:04.518 "num_base_bdevs_operational": 2, 00:13:04.518 "base_bdevs_list": [ 00:13:04.518 { 00:13:04.518 "name": "BaseBdev1", 00:13:04.518 "uuid": "0f638682-74ad-57be-91a2-4c25bc54aba2", 00:13:04.518 "is_configured": true, 00:13:04.518 "data_offset": 2048, 00:13:04.518 "data_size": 63488 00:13:04.518 }, 00:13:04.518 { 00:13:04.518 "name": "BaseBdev2", 00:13:04.518 "uuid": "f737e4a5-6fe5-59cc-aff5-c26d2036da3d", 00:13:04.518 "is_configured": true, 00:13:04.518 "data_offset": 2048, 00:13:04.518 "data_size": 63488 00:13:04.518 } 00:13:04.518 ] 00:13:04.518 }' 00:13:04.518 09:12:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.518 09:12:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.776 09:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:04.776 09:12:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.776 09:12:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.776 [2024-10-15 09:12:48.680992] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:04.776 [2024-10-15 09:12:48.681184] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:04.776 [2024-10-15 09:12:48.685093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:04.776 { 00:13:04.776 "results": [ 00:13:04.776 { 00:13:04.776 "job": "raid_bdev1", 00:13:04.776 "core_mask": "0x1", 00:13:04.776 "workload": "randrw", 00:13:04.776 "percentage": 50, 00:13:04.776 "status": "finished", 00:13:04.776 "queue_depth": 1, 00:13:04.776 "io_size": 131072, 00:13:04.776 "runtime": 1.389306, 00:13:04.776 "iops": 9922.220158841898, 00:13:04.776 "mibps": 1240.2775198552372, 00:13:04.776 "io_failed": 1, 00:13:04.776 "io_timeout": 0, 00:13:04.776 "avg_latency_us": 141.92070690951294, 00:13:04.776 "min_latency_us": 40.02909090909091, 00:13:04.776 "max_latency_us": 1936.290909090909 00:13:04.776 } 00:13:04.776 ], 00:13:04.776 "core_count": 1 00:13:04.776 } 00:13:04.776 [2024-10-15 09:12:48.685348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.776 [2024-10-15 09:12:48.685411] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:04.776 [2024-10-15 09:12:48.685432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:04.776 09:12:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.776 09:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61550 00:13:04.776 09:12:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 61550 ']' 00:13:04.776 09:12:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 61550 00:13:04.776 09:12:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:13:04.776 09:12:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:04.776 09:12:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61550 00:13:05.035 killing process with pid 61550 00:13:05.035 09:12:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:05.035 09:12:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:05.035 09:12:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61550' 00:13:05.035 09:12:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 61550 00:13:05.035 [2024-10-15 09:12:48.721596] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:05.035 09:12:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 61550 00:13:05.035 [2024-10-15 09:12:48.855227] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:06.409 09:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ecC8oRJiuh 00:13:06.409 09:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:06.409 09:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:06.409 09:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:13:06.409 09:12:50 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:06.409 09:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:06.409 09:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:06.409 09:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:13:06.409 00:13:06.409 real 0m4.659s 00:13:06.409 user 0m5.748s 00:13:06.409 sys 0m0.625s 00:13:06.409 09:12:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:06.409 09:12:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.409 ************************************ 00:13:06.409 END TEST raid_read_error_test 00:13:06.409 ************************************ 00:13:06.409 09:12:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:13:06.409 09:12:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:06.409 09:12:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:06.409 09:12:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:06.409 ************************************ 00:13:06.409 START TEST raid_write_error_test 00:13:06.409 ************************************ 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:06.409 09:12:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mOLm4JxWkY 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61696 00:13:06.409 09:12:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61696 00:13:06.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 61696 ']' 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:06.409 09:12:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.409 [2024-10-15 09:12:50.220492] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:13:06.409 [2024-10-15 09:12:50.220965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61696 ] 00:13:06.667 [2024-10-15 09:12:50.396378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.667 [2024-10-15 09:12:50.559365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.953 [2024-10-15 09:12:50.793865] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.953 [2024-10-15 09:12:50.793976] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:07.517 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:07.517 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:07.517 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:07.517 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:07.517 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.517 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.517 BaseBdev1_malloc 00:13:07.517 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.517 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:07.517 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.517 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.517 true 00:13:07.517 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:07.517 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:07.517 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.517 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.517 [2024-10-15 09:12:51.276683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:07.517 [2024-10-15 09:12:51.276996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.517 [2024-10-15 09:12:51.277226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:07.517 [2024-10-15 09:12:51.277413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.517 [2024-10-15 09:12:51.281631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.517 [2024-10-15 09:12:51.281700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:07.517 BaseBdev1 00:13:07.517 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.517 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.518 BaseBdev2_malloc 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:07.518 09:12:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.518 true 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.518 [2024-10-15 09:12:51.364532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:07.518 [2024-10-15 09:12:51.364649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.518 [2024-10-15 09:12:51.364702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:07.518 [2024-10-15 09:12:51.364740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.518 [2024-10-15 09:12:51.368682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.518 [2024-10-15 09:12:51.368755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:07.518 BaseBdev2 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.518 [2024-10-15 09:12:51.373100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:13:07.518 [2024-10-15 09:12:51.375975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:07.518 [2024-10-15 09:12:51.376267] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:07.518 [2024-10-15 09:12:51.376300] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:07.518 [2024-10-15 09:12:51.376616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:07.518 [2024-10-15 09:12:51.376854] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:07.518 [2024-10-15 09:12:51.376872] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:07.518 [2024-10-15 09:12:51.377142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.518 "name": "raid_bdev1", 00:13:07.518 "uuid": "bc9ffd22-cba8-470e-be2f-eb3cccbc0ea0", 00:13:07.518 "strip_size_kb": 64, 00:13:07.518 "state": "online", 00:13:07.518 "raid_level": "raid0", 00:13:07.518 "superblock": true, 00:13:07.518 "num_base_bdevs": 2, 00:13:07.518 "num_base_bdevs_discovered": 2, 00:13:07.518 "num_base_bdevs_operational": 2, 00:13:07.518 "base_bdevs_list": [ 00:13:07.518 { 00:13:07.518 "name": "BaseBdev1", 00:13:07.518 "uuid": "4589c227-0142-5df5-899a-12b00fd3674c", 00:13:07.518 "is_configured": true, 00:13:07.518 "data_offset": 2048, 00:13:07.518 "data_size": 63488 00:13:07.518 }, 00:13:07.518 { 00:13:07.518 "name": "BaseBdev2", 00:13:07.518 "uuid": "ff0fdfd8-78eb-5eba-83fa-b05e00019da2", 00:13:07.518 "is_configured": true, 00:13:07.518 "data_offset": 2048, 00:13:07.518 "data_size": 63488 00:13:07.518 } 00:13:07.518 ] 00:13:07.518 }' 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.518 09:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.083 09:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:08.083 09:12:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:08.342 [2024-10-15 09:12:52.042782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.277 09:12:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.277 "name": "raid_bdev1", 00:13:09.277 "uuid": "bc9ffd22-cba8-470e-be2f-eb3cccbc0ea0", 00:13:09.277 "strip_size_kb": 64, 00:13:09.277 "state": "online", 00:13:09.277 "raid_level": "raid0", 00:13:09.277 "superblock": true, 00:13:09.277 "num_base_bdevs": 2, 00:13:09.277 "num_base_bdevs_discovered": 2, 00:13:09.277 "num_base_bdevs_operational": 2, 00:13:09.277 "base_bdevs_list": [ 00:13:09.277 { 00:13:09.277 "name": "BaseBdev1", 00:13:09.277 "uuid": "4589c227-0142-5df5-899a-12b00fd3674c", 00:13:09.277 "is_configured": true, 00:13:09.277 "data_offset": 2048, 00:13:09.277 "data_size": 63488 00:13:09.277 }, 00:13:09.277 { 00:13:09.277 "name": "BaseBdev2", 00:13:09.277 "uuid": "ff0fdfd8-78eb-5eba-83fa-b05e00019da2", 00:13:09.277 "is_configured": true, 00:13:09.277 "data_offset": 2048, 00:13:09.277 "data_size": 63488 00:13:09.277 } 00:13:09.277 ] 00:13:09.277 }' 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.277 09:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.535 09:12:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:13:09.535 09:12:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.535 09:12:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.535 [2024-10-15 09:12:53.396252] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:09.535 [2024-10-15 09:12:53.396448] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:09.535 [2024-10-15 09:12:53.399967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.535 { 00:13:09.535 "results": [ 00:13:09.535 { 00:13:09.535 "job": "raid_bdev1", 00:13:09.535 "core_mask": "0x1", 00:13:09.535 "workload": "randrw", 00:13:09.535 "percentage": 50, 00:13:09.535 "status": "finished", 00:13:09.535 "queue_depth": 1, 00:13:09.535 "io_size": 131072, 00:13:09.535 "runtime": 1.350962, 00:13:09.535 "iops": 10121.676257363271, 00:13:09.535 "mibps": 1265.209532170409, 00:13:09.535 "io_failed": 1, 00:13:09.535 "io_timeout": 0, 00:13:09.535 "avg_latency_us": 139.2362514209739, 00:13:09.535 "min_latency_us": 43.985454545454544, 00:13:09.535 "max_latency_us": 1884.16 00:13:09.535 } 00:13:09.535 ], 00:13:09.535 "core_count": 1 00:13:09.535 } 00:13:09.535 [2024-10-15 09:12:53.400180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.535 [2024-10-15 09:12:53.400243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.535 [2024-10-15 09:12:53.400265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:09.535 09:12:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.535 09:12:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61696 00:13:09.535 09:12:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 
-- # '[' -z 61696 ']' 00:13:09.535 09:12:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 61696 00:13:09.535 09:12:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:13:09.535 09:12:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:09.535 09:12:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61696 00:13:09.535 09:12:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:09.535 killing process with pid 61696 00:13:09.535 09:12:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:09.535 09:12:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61696' 00:13:09.535 09:12:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 61696 00:13:09.535 09:12:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 61696 00:13:09.535 [2024-10-15 09:12:53.434803] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:09.793 [2024-10-15 09:12:53.566658] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:11.177 09:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mOLm4JxWkY 00:13:11.177 09:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:11.177 09:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:11.177 09:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:13:11.177 09:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:11.177 09:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:11.177 09:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 
00:13:11.177 09:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:13:11.177 00:13:11.177 real 0m4.658s 00:13:11.177 user 0m5.738s 00:13:11.177 sys 0m0.606s 00:13:11.177 09:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.177 09:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.177 ************************************ 00:13:11.177 END TEST raid_write_error_test 00:13:11.177 ************************************ 00:13:11.177 09:12:54 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:11.177 09:12:54 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:13:11.177 09:12:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:11.178 09:12:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.178 09:12:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:11.178 ************************************ 00:13:11.178 START TEST raid_state_function_test 00:13:11.178 ************************************ 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.178 09:12:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61845 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61845' 00:13:11.178 Process raid pid: 61845 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61845 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 61845 ']' 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:11.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:11.178 09:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.178 [2024-10-15 09:12:54.904090] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:13:11.178 [2024-10-15 09:12:54.904283] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.178 [2024-10-15 09:12:55.075028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.436 [2024-10-15 09:12:55.223154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.694 [2024-10-15 09:12:55.449687] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.694 [2024-10-15 09:12:55.449758] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.261 09:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:12.261 09:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:12.261 09:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:12.261 09:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.261 09:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.261 [2024-10-15 09:12:55.913427] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:12.261 [2024-10-15 09:12:55.913540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:12.261 [2024-10-15 09:12:55.913574] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:12.261 [2024-10-15 09:12:55.913591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:12.261 09:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.261 09:12:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:12.261 09:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.261 09:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.261 09:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:12.261 09:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.261 09:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:12.261 09:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.261 09:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.261 09:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.261 09:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.261 09:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.261 09:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.261 09:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.261 09:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.261 09:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.261 09:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.261 "name": "Existed_Raid", 00:13:12.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.261 "strip_size_kb": 64, 00:13:12.261 "state": "configuring", 00:13:12.261 
"raid_level": "concat", 00:13:12.261 "superblock": false, 00:13:12.261 "num_base_bdevs": 2, 00:13:12.261 "num_base_bdevs_discovered": 0, 00:13:12.261 "num_base_bdevs_operational": 2, 00:13:12.261 "base_bdevs_list": [ 00:13:12.261 { 00:13:12.261 "name": "BaseBdev1", 00:13:12.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.261 "is_configured": false, 00:13:12.261 "data_offset": 0, 00:13:12.261 "data_size": 0 00:13:12.261 }, 00:13:12.261 { 00:13:12.261 "name": "BaseBdev2", 00:13:12.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.261 "is_configured": false, 00:13:12.261 "data_offset": 0, 00:13:12.261 "data_size": 0 00:13:12.262 } 00:13:12.262 ] 00:13:12.262 }' 00:13:12.262 09:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.262 09:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.520 09:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:12.520 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.520 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.520 [2024-10-15 09:12:56.385525] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:12.520 [2024-10-15 09:12:56.385605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:12.521 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.521 09:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:12.521 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.521 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:13:12.521 [2024-10-15 09:12:56.397528] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:12.521 [2024-10-15 09:12:56.397584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:12.521 [2024-10-15 09:12:56.397600] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:12.521 [2024-10-15 09:12:56.397620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:12.521 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.521 09:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:12.521 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.521 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.521 [2024-10-15 09:12:56.445105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:12.521 BaseBdev1 00:13:12.521 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.521 09:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:12.521 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:12.521 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:12.521 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:12.521 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:12.521 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:12.521 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:13:12.521 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.521 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.779 [ 00:13:12.779 { 00:13:12.779 "name": "BaseBdev1", 00:13:12.779 "aliases": [ 00:13:12.779 "3b63bf4b-c601-482e-9ef7-3fc20f34a7f0" 00:13:12.779 ], 00:13:12.779 "product_name": "Malloc disk", 00:13:12.779 "block_size": 512, 00:13:12.779 "num_blocks": 65536, 00:13:12.779 "uuid": "3b63bf4b-c601-482e-9ef7-3fc20f34a7f0", 00:13:12.779 "assigned_rate_limits": { 00:13:12.779 "rw_ios_per_sec": 0, 00:13:12.779 "rw_mbytes_per_sec": 0, 00:13:12.779 "r_mbytes_per_sec": 0, 00:13:12.779 "w_mbytes_per_sec": 0 00:13:12.779 }, 00:13:12.779 "claimed": true, 00:13:12.779 "claim_type": "exclusive_write", 00:13:12.779 "zoned": false, 00:13:12.779 "supported_io_types": { 00:13:12.779 "read": true, 00:13:12.779 "write": true, 00:13:12.779 "unmap": true, 00:13:12.779 "flush": true, 00:13:12.779 "reset": true, 00:13:12.779 "nvme_admin": false, 00:13:12.779 "nvme_io": false, 00:13:12.779 "nvme_io_md": false, 00:13:12.779 "write_zeroes": true, 00:13:12.779 "zcopy": true, 00:13:12.779 "get_zone_info": false, 00:13:12.779 "zone_management": false, 00:13:12.779 "zone_append": false, 00:13:12.779 "compare": false, 00:13:12.779 "compare_and_write": false, 00:13:12.779 "abort": true, 00:13:12.779 "seek_hole": false, 00:13:12.779 "seek_data": false, 00:13:12.779 "copy": true, 00:13:12.779 "nvme_iov_md": 
false 00:13:12.779 }, 00:13:12.779 "memory_domains": [ 00:13:12.779 { 00:13:12.779 "dma_device_id": "system", 00:13:12.779 "dma_device_type": 1 00:13:12.779 }, 00:13:12.779 { 00:13:12.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.779 "dma_device_type": 2 00:13:12.779 } 00:13:12.779 ], 00:13:12.779 "driver_specific": {} 00:13:12.779 } 00:13:12.779 ] 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.779 09:12:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.779 "name": "Existed_Raid", 00:13:12.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.779 "strip_size_kb": 64, 00:13:12.779 "state": "configuring", 00:13:12.779 "raid_level": "concat", 00:13:12.779 "superblock": false, 00:13:12.779 "num_base_bdevs": 2, 00:13:12.779 "num_base_bdevs_discovered": 1, 00:13:12.779 "num_base_bdevs_operational": 2, 00:13:12.779 "base_bdevs_list": [ 00:13:12.779 { 00:13:12.779 "name": "BaseBdev1", 00:13:12.779 "uuid": "3b63bf4b-c601-482e-9ef7-3fc20f34a7f0", 00:13:12.779 "is_configured": true, 00:13:12.779 "data_offset": 0, 00:13:12.779 "data_size": 65536 00:13:12.779 }, 00:13:12.779 { 00:13:12.779 "name": "BaseBdev2", 00:13:12.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.779 "is_configured": false, 00:13:12.779 "data_offset": 0, 00:13:12.779 "data_size": 0 00:13:12.779 } 00:13:12.779 ] 00:13:12.779 }' 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.779 09:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.380 [2024-10-15 09:12:57.029352] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:13.380 [2024-10-15 09:12:57.029430] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.380 [2024-10-15 09:12:57.037390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.380 [2024-10-15 09:12:57.039952] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:13.380 [2024-10-15 09:12:57.040009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.380 "name": "Existed_Raid", 00:13:13.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.380 "strip_size_kb": 64, 00:13:13.380 "state": "configuring", 00:13:13.380 "raid_level": "concat", 00:13:13.380 "superblock": false, 00:13:13.380 "num_base_bdevs": 2, 00:13:13.380 "num_base_bdevs_discovered": 1, 00:13:13.380 "num_base_bdevs_operational": 2, 00:13:13.380 "base_bdevs_list": [ 00:13:13.380 { 00:13:13.380 "name": "BaseBdev1", 00:13:13.380 "uuid": "3b63bf4b-c601-482e-9ef7-3fc20f34a7f0", 00:13:13.380 "is_configured": true, 00:13:13.380 "data_offset": 0, 00:13:13.380 "data_size": 65536 00:13:13.380 }, 00:13:13.380 { 00:13:13.380 "name": "BaseBdev2", 00:13:13.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.380 "is_configured": false, 00:13:13.380 "data_offset": 0, 00:13:13.380 "data_size": 0 
00:13:13.380 } 00:13:13.380 ] 00:13:13.380 }' 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.380 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.639 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:13.639 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.639 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.639 [2024-10-15 09:12:57.539333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:13.639 [2024-10-15 09:12:57.539423] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:13.639 [2024-10-15 09:12:57.539436] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:13.639 [2024-10-15 09:12:57.539797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:13.639 [2024-10-15 09:12:57.540017] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:13.639 [2024-10-15 09:12:57.540052] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:13.639 [2024-10-15 09:12:57.540412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.639 BaseBdev2 00:13:13.639 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.639 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:13.639 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:13.639 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:13.639 09:12:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:13.639 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:13.639 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:13.639 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:13.639 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.639 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.639 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.639 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:13.639 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.639 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.639 [ 00:13:13.639 { 00:13:13.639 "name": "BaseBdev2", 00:13:13.639 "aliases": [ 00:13:13.639 "efc8b879-a17e-4de8-946e-cdbf04eee549" 00:13:13.639 ], 00:13:13.639 "product_name": "Malloc disk", 00:13:13.639 "block_size": 512, 00:13:13.639 "num_blocks": 65536, 00:13:13.639 "uuid": "efc8b879-a17e-4de8-946e-cdbf04eee549", 00:13:13.639 "assigned_rate_limits": { 00:13:13.639 "rw_ios_per_sec": 0, 00:13:13.639 "rw_mbytes_per_sec": 0, 00:13:13.639 "r_mbytes_per_sec": 0, 00:13:13.639 "w_mbytes_per_sec": 0 00:13:13.639 }, 00:13:13.639 "claimed": true, 00:13:13.639 "claim_type": "exclusive_write", 00:13:13.639 "zoned": false, 00:13:13.639 "supported_io_types": { 00:13:13.639 "read": true, 00:13:13.639 "write": true, 00:13:13.639 "unmap": true, 00:13:13.639 "flush": true, 00:13:13.639 "reset": true, 00:13:13.639 "nvme_admin": false, 00:13:13.639 "nvme_io": false, 00:13:13.639 "nvme_io_md": 
false, 00:13:13.639 "write_zeroes": true, 00:13:13.639 "zcopy": true, 00:13:13.639 "get_zone_info": false, 00:13:13.639 "zone_management": false, 00:13:13.639 "zone_append": false, 00:13:13.639 "compare": false, 00:13:13.639 "compare_and_write": false, 00:13:13.639 "abort": true, 00:13:13.898 "seek_hole": false, 00:13:13.898 "seek_data": false, 00:13:13.898 "copy": true, 00:13:13.898 "nvme_iov_md": false 00:13:13.898 }, 00:13:13.898 "memory_domains": [ 00:13:13.898 { 00:13:13.898 "dma_device_id": "system", 00:13:13.898 "dma_device_type": 1 00:13:13.898 }, 00:13:13.898 { 00:13:13.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.898 "dma_device_type": 2 00:13:13.898 } 00:13:13.898 ], 00:13:13.898 "driver_specific": {} 00:13:13.898 } 00:13:13.898 ] 00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.898 "name": "Existed_Raid", 00:13:13.898 "uuid": "e4ec2c53-34a7-4b82-a1cd-e38884bd2ec7", 00:13:13.898 "strip_size_kb": 64, 00:13:13.898 "state": "online", 00:13:13.898 "raid_level": "concat", 00:13:13.898 "superblock": false, 00:13:13.898 "num_base_bdevs": 2, 00:13:13.898 "num_base_bdevs_discovered": 2, 00:13:13.898 "num_base_bdevs_operational": 2, 00:13:13.898 "base_bdevs_list": [ 00:13:13.898 { 00:13:13.898 "name": "BaseBdev1", 00:13:13.898 "uuid": "3b63bf4b-c601-482e-9ef7-3fc20f34a7f0", 00:13:13.898 "is_configured": true, 00:13:13.898 "data_offset": 0, 00:13:13.898 "data_size": 65536 00:13:13.898 }, 00:13:13.898 { 00:13:13.898 "name": "BaseBdev2", 00:13:13.898 "uuid": "efc8b879-a17e-4de8-946e-cdbf04eee549", 00:13:13.898 "is_configured": true, 00:13:13.898 "data_offset": 0, 00:13:13.898 "data_size": 65536 00:13:13.898 } 00:13:13.898 ] 00:13:13.898 }' 00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:13:13.898 09:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.158 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:14.158 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:14.158 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:14.158 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:14.158 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:14.158 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:14.158 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:14.158 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:14.158 09:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.158 09:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.158 [2024-10-15 09:12:58.079944] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.418 09:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.418 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:14.418 "name": "Existed_Raid", 00:13:14.418 "aliases": [ 00:13:14.418 "e4ec2c53-34a7-4b82-a1cd-e38884bd2ec7" 00:13:14.418 ], 00:13:14.418 "product_name": "Raid Volume", 00:13:14.418 "block_size": 512, 00:13:14.418 "num_blocks": 131072, 00:13:14.418 "uuid": "e4ec2c53-34a7-4b82-a1cd-e38884bd2ec7", 00:13:14.418 "assigned_rate_limits": { 00:13:14.418 "rw_ios_per_sec": 0, 00:13:14.418 "rw_mbytes_per_sec": 0, 00:13:14.418 "r_mbytes_per_sec": 
0, 00:13:14.418 "w_mbytes_per_sec": 0 00:13:14.418 }, 00:13:14.418 "claimed": false, 00:13:14.418 "zoned": false, 00:13:14.418 "supported_io_types": { 00:13:14.418 "read": true, 00:13:14.418 "write": true, 00:13:14.418 "unmap": true, 00:13:14.418 "flush": true, 00:13:14.418 "reset": true, 00:13:14.418 "nvme_admin": false, 00:13:14.418 "nvme_io": false, 00:13:14.418 "nvme_io_md": false, 00:13:14.418 "write_zeroes": true, 00:13:14.418 "zcopy": false, 00:13:14.418 "get_zone_info": false, 00:13:14.418 "zone_management": false, 00:13:14.418 "zone_append": false, 00:13:14.418 "compare": false, 00:13:14.418 "compare_and_write": false, 00:13:14.418 "abort": false, 00:13:14.418 "seek_hole": false, 00:13:14.418 "seek_data": false, 00:13:14.418 "copy": false, 00:13:14.418 "nvme_iov_md": false 00:13:14.418 }, 00:13:14.418 "memory_domains": [ 00:13:14.418 { 00:13:14.418 "dma_device_id": "system", 00:13:14.418 "dma_device_type": 1 00:13:14.418 }, 00:13:14.418 { 00:13:14.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.418 "dma_device_type": 2 00:13:14.418 }, 00:13:14.418 { 00:13:14.418 "dma_device_id": "system", 00:13:14.418 "dma_device_type": 1 00:13:14.418 }, 00:13:14.418 { 00:13:14.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.418 "dma_device_type": 2 00:13:14.418 } 00:13:14.418 ], 00:13:14.418 "driver_specific": { 00:13:14.418 "raid": { 00:13:14.418 "uuid": "e4ec2c53-34a7-4b82-a1cd-e38884bd2ec7", 00:13:14.418 "strip_size_kb": 64, 00:13:14.418 "state": "online", 00:13:14.418 "raid_level": "concat", 00:13:14.418 "superblock": false, 00:13:14.418 "num_base_bdevs": 2, 00:13:14.418 "num_base_bdevs_discovered": 2, 00:13:14.418 "num_base_bdevs_operational": 2, 00:13:14.418 "base_bdevs_list": [ 00:13:14.418 { 00:13:14.418 "name": "BaseBdev1", 00:13:14.418 "uuid": "3b63bf4b-c601-482e-9ef7-3fc20f34a7f0", 00:13:14.418 "is_configured": true, 00:13:14.418 "data_offset": 0, 00:13:14.418 "data_size": 65536 00:13:14.418 }, 00:13:14.418 { 00:13:14.418 "name": "BaseBdev2", 
00:13:14.419 "uuid": "efc8b879-a17e-4de8-946e-cdbf04eee549", 00:13:14.419 "is_configured": true, 00:13:14.419 "data_offset": 0, 00:13:14.419 "data_size": 65536 00:13:14.419 } 00:13:14.419 ] 00:13:14.419 } 00:13:14.419 } 00:13:14.419 }' 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:14.419 BaseBdev2' 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.419 09:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.419 [2024-10-15 09:12:58.319760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:14.419 [2024-10-15 09:12:58.319806] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:14.419 [2024-10-15 09:12:58.319918] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.678 "name": "Existed_Raid", 00:13:14.678 "uuid": "e4ec2c53-34a7-4b82-a1cd-e38884bd2ec7", 00:13:14.678 "strip_size_kb": 64, 00:13:14.678 
"state": "offline", 00:13:14.678 "raid_level": "concat", 00:13:14.678 "superblock": false, 00:13:14.678 "num_base_bdevs": 2, 00:13:14.678 "num_base_bdevs_discovered": 1, 00:13:14.678 "num_base_bdevs_operational": 1, 00:13:14.678 "base_bdevs_list": [ 00:13:14.678 { 00:13:14.678 "name": null, 00:13:14.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.678 "is_configured": false, 00:13:14.678 "data_offset": 0, 00:13:14.678 "data_size": 65536 00:13:14.678 }, 00:13:14.678 { 00:13:14.678 "name": "BaseBdev2", 00:13:14.678 "uuid": "efc8b879-a17e-4de8-946e-cdbf04eee549", 00:13:14.678 "is_configured": true, 00:13:14.678 "data_offset": 0, 00:13:14.678 "data_size": 65536 00:13:14.678 } 00:13:14.678 ] 00:13:14.678 }' 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.678 09:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.245 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:15.245 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:15.245 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:15.245 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.245 09:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.245 09:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.245 09:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.245 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:15.245 09:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:15.245 09:12:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:15.245 09:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.245 09:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.245 [2024-10-15 09:12:58.924098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:15.245 [2024-10-15 09:12:58.924359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61845 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 61845 ']' 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 61845 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61845 00:13:15.245 killing process with pid 61845 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61845' 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 61845 00:13:15.245 [2024-10-15 09:12:59.114140] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:15.245 09:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 61845 00:13:15.245 [2024-10-15 09:12:59.129649] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:16.621 00:13:16.621 real 0m5.456s 00:13:16.621 user 0m8.079s 00:13:16.621 sys 0m0.823s 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.621 ************************************ 00:13:16.621 END TEST raid_state_function_test 00:13:16.621 ************************************ 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.621 09:13:00 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:13:16.621 09:13:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:13:16.621 09:13:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.621 09:13:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:16.621 ************************************ 00:13:16.621 START TEST raid_state_function_test_sb 00:13:16.621 ************************************ 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62098 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:16.621 Process raid pid: 62098 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62098' 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62098 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 62098 ']' 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:16.621 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:16.621 09:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.621 [2024-10-15 09:13:00.438616] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:13:16.621 [2024-10-15 09:13:00.438822] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.880 [2024-10-15 09:13:00.614224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.880 [2024-10-15 09:13:00.762916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.139 [2024-10-15 09:13:00.995258] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.139 [2024-10-15 09:13:00.995328] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.706 [2024-10-15 09:13:01.528972] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:13:17.706 [2024-10-15 09:13:01.529047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:17.706 [2024-10-15 09:13:01.529065] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:17.706 [2024-10-15 09:13:01.529083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.706 
09:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.706 "name": "Existed_Raid", 00:13:17.706 "uuid": "f3ac09a8-8d4c-49cf-8e1f-a09d28360a55", 00:13:17.706 "strip_size_kb": 64, 00:13:17.706 "state": "configuring", 00:13:17.706 "raid_level": "concat", 00:13:17.706 "superblock": true, 00:13:17.706 "num_base_bdevs": 2, 00:13:17.706 "num_base_bdevs_discovered": 0, 00:13:17.706 "num_base_bdevs_operational": 2, 00:13:17.706 "base_bdevs_list": [ 00:13:17.706 { 00:13:17.706 "name": "BaseBdev1", 00:13:17.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.706 "is_configured": false, 00:13:17.706 "data_offset": 0, 00:13:17.706 "data_size": 0 00:13:17.706 }, 00:13:17.706 { 00:13:17.706 "name": "BaseBdev2", 00:13:17.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.706 "is_configured": false, 00:13:17.706 "data_offset": 0, 00:13:17.706 "data_size": 0 00:13:17.706 } 00:13:17.706 ] 00:13:17.706 }' 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.706 09:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.273 [2024-10-15 09:13:02.073033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:13:18.273 [2024-10-15 09:13:02.073086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.273 [2024-10-15 09:13:02.081103] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:18.273 [2024-10-15 09:13:02.081205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:18.273 [2024-10-15 09:13:02.081225] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:18.273 [2024-10-15 09:13:02.081248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.273 [2024-10-15 09:13:02.130829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:18.273 BaseBdev1 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.273 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.273 [ 00:13:18.273 { 00:13:18.273 "name": "BaseBdev1", 00:13:18.273 "aliases": [ 00:13:18.273 "ecd8be79-5535-4795-8198-96c5645a350e" 00:13:18.273 ], 00:13:18.273 "product_name": "Malloc disk", 00:13:18.273 "block_size": 512, 00:13:18.273 "num_blocks": 65536, 00:13:18.273 "uuid": "ecd8be79-5535-4795-8198-96c5645a350e", 00:13:18.273 "assigned_rate_limits": { 00:13:18.273 "rw_ios_per_sec": 0, 00:13:18.273 "rw_mbytes_per_sec": 0, 00:13:18.273 "r_mbytes_per_sec": 0, 00:13:18.273 "w_mbytes_per_sec": 0 00:13:18.273 }, 00:13:18.273 "claimed": true, 
00:13:18.273 "claim_type": "exclusive_write", 00:13:18.273 "zoned": false, 00:13:18.273 "supported_io_types": { 00:13:18.273 "read": true, 00:13:18.273 "write": true, 00:13:18.273 "unmap": true, 00:13:18.273 "flush": true, 00:13:18.273 "reset": true, 00:13:18.273 "nvme_admin": false, 00:13:18.273 "nvme_io": false, 00:13:18.273 "nvme_io_md": false, 00:13:18.273 "write_zeroes": true, 00:13:18.273 "zcopy": true, 00:13:18.273 "get_zone_info": false, 00:13:18.273 "zone_management": false, 00:13:18.273 "zone_append": false, 00:13:18.273 "compare": false, 00:13:18.273 "compare_and_write": false, 00:13:18.273 "abort": true, 00:13:18.273 "seek_hole": false, 00:13:18.273 "seek_data": false, 00:13:18.273 "copy": true, 00:13:18.273 "nvme_iov_md": false 00:13:18.273 }, 00:13:18.273 "memory_domains": [ 00:13:18.273 { 00:13:18.273 "dma_device_id": "system", 00:13:18.273 "dma_device_type": 1 00:13:18.273 }, 00:13:18.273 { 00:13:18.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.273 "dma_device_type": 2 00:13:18.273 } 00:13:18.273 ], 00:13:18.273 "driver_specific": {} 00:13:18.273 } 00:13:18.273 ] 00:13:18.274 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.274 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:18.274 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:18.274 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.274 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.274 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:18.274 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.274 09:13:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:18.274 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.274 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.274 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.274 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.274 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.274 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.274 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.274 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.274 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.562 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.562 "name": "Existed_Raid", 00:13:18.562 "uuid": "69426f23-c646-40b2-9575-b3cf3ade9e53", 00:13:18.562 "strip_size_kb": 64, 00:13:18.562 "state": "configuring", 00:13:18.562 "raid_level": "concat", 00:13:18.562 "superblock": true, 00:13:18.562 "num_base_bdevs": 2, 00:13:18.562 "num_base_bdevs_discovered": 1, 00:13:18.562 "num_base_bdevs_operational": 2, 00:13:18.562 "base_bdevs_list": [ 00:13:18.562 { 00:13:18.562 "name": "BaseBdev1", 00:13:18.562 "uuid": "ecd8be79-5535-4795-8198-96c5645a350e", 00:13:18.562 "is_configured": true, 00:13:18.562 "data_offset": 2048, 00:13:18.562 "data_size": 63488 00:13:18.562 }, 00:13:18.562 { 00:13:18.562 "name": "BaseBdev2", 00:13:18.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.562 
"is_configured": false, 00:13:18.562 "data_offset": 0, 00:13:18.562 "data_size": 0 00:13:18.562 } 00:13:18.562 ] 00:13:18.562 }' 00:13:18.562 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.562 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.842 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.843 [2024-10-15 09:13:02.679050] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:18.843 [2024-10-15 09:13:02.679162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.843 [2024-10-15 09:13:02.687173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:18.843 [2024-10-15 09:13:02.689883] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:18.843 [2024-10-15 09:13:02.689950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.843 09:13:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.843 09:13:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.843 "name": "Existed_Raid", 00:13:18.843 "uuid": "0e2c2fab-f92e-4934-8180-2916e56d9789", 00:13:18.843 "strip_size_kb": 64, 00:13:18.843 "state": "configuring", 00:13:18.843 "raid_level": "concat", 00:13:18.843 "superblock": true, 00:13:18.843 "num_base_bdevs": 2, 00:13:18.843 "num_base_bdevs_discovered": 1, 00:13:18.843 "num_base_bdevs_operational": 2, 00:13:18.843 "base_bdevs_list": [ 00:13:18.843 { 00:13:18.843 "name": "BaseBdev1", 00:13:18.843 "uuid": "ecd8be79-5535-4795-8198-96c5645a350e", 00:13:18.843 "is_configured": true, 00:13:18.843 "data_offset": 2048, 00:13:18.843 "data_size": 63488 00:13:18.843 }, 00:13:18.843 { 00:13:18.843 "name": "BaseBdev2", 00:13:18.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.843 "is_configured": false, 00:13:18.843 "data_offset": 0, 00:13:18.843 "data_size": 0 00:13:18.843 } 00:13:18.843 ] 00:13:18.843 }' 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.843 09:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.409 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.410 [2024-10-15 09:13:03.245524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:19.410 [2024-10-15 09:13:03.246189] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:19.410 [2024-10-15 09:13:03.246216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:19.410 BaseBdev2 00:13:19.410 [2024-10-15 09:13:03.246720] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:19.410 [2024-10-15 09:13:03.246924] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:19.410 [2024-10-15 09:13:03.246954] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:19.410 [2024-10-15 09:13:03.247201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.410 
09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.410 [ 00:13:19.410 { 00:13:19.410 "name": "BaseBdev2", 00:13:19.410 "aliases": [ 00:13:19.410 "5666b214-4fd6-44dd-bc1c-963f19167be4" 00:13:19.410 ], 00:13:19.410 "product_name": "Malloc disk", 00:13:19.410 "block_size": 512, 00:13:19.410 "num_blocks": 65536, 00:13:19.410 "uuid": "5666b214-4fd6-44dd-bc1c-963f19167be4", 00:13:19.410 "assigned_rate_limits": { 00:13:19.410 "rw_ios_per_sec": 0, 00:13:19.410 "rw_mbytes_per_sec": 0, 00:13:19.410 "r_mbytes_per_sec": 0, 00:13:19.410 "w_mbytes_per_sec": 0 00:13:19.410 }, 00:13:19.410 "claimed": true, 00:13:19.410 "claim_type": "exclusive_write", 00:13:19.410 "zoned": false, 00:13:19.410 "supported_io_types": { 00:13:19.410 "read": true, 00:13:19.410 "write": true, 00:13:19.410 "unmap": true, 00:13:19.410 "flush": true, 00:13:19.410 "reset": true, 00:13:19.410 "nvme_admin": false, 00:13:19.410 "nvme_io": false, 00:13:19.410 "nvme_io_md": false, 00:13:19.410 "write_zeroes": true, 00:13:19.410 "zcopy": true, 00:13:19.410 "get_zone_info": false, 00:13:19.410 "zone_management": false, 00:13:19.410 "zone_append": false, 00:13:19.410 "compare": false, 00:13:19.410 "compare_and_write": false, 00:13:19.410 "abort": true, 00:13:19.410 "seek_hole": false, 00:13:19.410 "seek_data": false, 00:13:19.410 "copy": true, 00:13:19.410 "nvme_iov_md": false 00:13:19.410 }, 00:13:19.410 "memory_domains": [ 00:13:19.410 { 00:13:19.410 "dma_device_id": "system", 00:13:19.410 "dma_device_type": 1 00:13:19.410 }, 00:13:19.410 { 00:13:19.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.410 "dma_device_type": 2 00:13:19.410 } 00:13:19.410 ], 00:13:19.410 "driver_specific": {} 00:13:19.410 } 00:13:19.410 ] 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:19.410 09:13:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.410 09:13:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.410 "name": "Existed_Raid", 00:13:19.410 "uuid": "0e2c2fab-f92e-4934-8180-2916e56d9789", 00:13:19.410 "strip_size_kb": 64, 00:13:19.410 "state": "online", 00:13:19.410 "raid_level": "concat", 00:13:19.410 "superblock": true, 00:13:19.410 "num_base_bdevs": 2, 00:13:19.410 "num_base_bdevs_discovered": 2, 00:13:19.410 "num_base_bdevs_operational": 2, 00:13:19.410 "base_bdevs_list": [ 00:13:19.410 { 00:13:19.410 "name": "BaseBdev1", 00:13:19.410 "uuid": "ecd8be79-5535-4795-8198-96c5645a350e", 00:13:19.410 "is_configured": true, 00:13:19.410 "data_offset": 2048, 00:13:19.410 "data_size": 63488 00:13:19.410 }, 00:13:19.410 { 00:13:19.410 "name": "BaseBdev2", 00:13:19.410 "uuid": "5666b214-4fd6-44dd-bc1c-963f19167be4", 00:13:19.410 "is_configured": true, 00:13:19.410 "data_offset": 2048, 00:13:19.410 "data_size": 63488 00:13:19.410 } 00:13:19.410 ] 00:13:19.410 }' 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.410 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.978 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:19.978 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:19.978 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:19.978 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:19.978 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:19.978 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:19.978 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:13:19.978 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:19.978 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.978 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.978 [2024-10-15 09:13:03.806161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:19.978 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.978 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:19.978 "name": "Existed_Raid", 00:13:19.978 "aliases": [ 00:13:19.978 "0e2c2fab-f92e-4934-8180-2916e56d9789" 00:13:19.978 ], 00:13:19.978 "product_name": "Raid Volume", 00:13:19.978 "block_size": 512, 00:13:19.978 "num_blocks": 126976, 00:13:19.978 "uuid": "0e2c2fab-f92e-4934-8180-2916e56d9789", 00:13:19.978 "assigned_rate_limits": { 00:13:19.978 "rw_ios_per_sec": 0, 00:13:19.978 "rw_mbytes_per_sec": 0, 00:13:19.978 "r_mbytes_per_sec": 0, 00:13:19.978 "w_mbytes_per_sec": 0 00:13:19.978 }, 00:13:19.978 "claimed": false, 00:13:19.978 "zoned": false, 00:13:19.978 "supported_io_types": { 00:13:19.978 "read": true, 00:13:19.978 "write": true, 00:13:19.978 "unmap": true, 00:13:19.978 "flush": true, 00:13:19.978 "reset": true, 00:13:19.978 "nvme_admin": false, 00:13:19.978 "nvme_io": false, 00:13:19.978 "nvme_io_md": false, 00:13:19.978 "write_zeroes": true, 00:13:19.978 "zcopy": false, 00:13:19.978 "get_zone_info": false, 00:13:19.978 "zone_management": false, 00:13:19.978 "zone_append": false, 00:13:19.978 "compare": false, 00:13:19.978 "compare_and_write": false, 00:13:19.978 "abort": false, 00:13:19.978 "seek_hole": false, 00:13:19.978 "seek_data": false, 00:13:19.978 "copy": false, 00:13:19.978 "nvme_iov_md": false 00:13:19.978 }, 00:13:19.978 "memory_domains": [ 00:13:19.978 { 00:13:19.978 
"dma_device_id": "system", 00:13:19.978 "dma_device_type": 1 00:13:19.978 }, 00:13:19.978 { 00:13:19.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.978 "dma_device_type": 2 00:13:19.978 }, 00:13:19.978 { 00:13:19.978 "dma_device_id": "system", 00:13:19.978 "dma_device_type": 1 00:13:19.978 }, 00:13:19.978 { 00:13:19.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.978 "dma_device_type": 2 00:13:19.978 } 00:13:19.978 ], 00:13:19.978 "driver_specific": { 00:13:19.978 "raid": { 00:13:19.978 "uuid": "0e2c2fab-f92e-4934-8180-2916e56d9789", 00:13:19.978 "strip_size_kb": 64, 00:13:19.978 "state": "online", 00:13:19.978 "raid_level": "concat", 00:13:19.978 "superblock": true, 00:13:19.978 "num_base_bdevs": 2, 00:13:19.978 "num_base_bdevs_discovered": 2, 00:13:19.978 "num_base_bdevs_operational": 2, 00:13:19.978 "base_bdevs_list": [ 00:13:19.978 { 00:13:19.978 "name": "BaseBdev1", 00:13:19.978 "uuid": "ecd8be79-5535-4795-8198-96c5645a350e", 00:13:19.978 "is_configured": true, 00:13:19.978 "data_offset": 2048, 00:13:19.978 "data_size": 63488 00:13:19.978 }, 00:13:19.978 { 00:13:19.978 "name": "BaseBdev2", 00:13:19.978 "uuid": "5666b214-4fd6-44dd-bc1c-963f19167be4", 00:13:19.978 "is_configured": true, 00:13:19.978 "data_offset": 2048, 00:13:19.978 "data_size": 63488 00:13:19.978 } 00:13:19.978 ] 00:13:19.978 } 00:13:19.978 } 00:13:19.978 }' 00:13:19.978 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:19.978 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:19.978 BaseBdev2' 00:13:19.978 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.237 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:20.237 09:13:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.237 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:20.237 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.237 09:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.237 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.237 09:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.237 [2024-10-15 09:13:04.061945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:20.237 [2024-10-15 09:13:04.062010] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:20.237 [2024-10-15 09:13:04.062092] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.237 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.495 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.495 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.495 "name": "Existed_Raid", 00:13:20.495 "uuid": "0e2c2fab-f92e-4934-8180-2916e56d9789", 00:13:20.495 "strip_size_kb": 64, 00:13:20.495 "state": "offline", 00:13:20.495 "raid_level": "concat", 00:13:20.495 "superblock": true, 00:13:20.495 "num_base_bdevs": 2, 00:13:20.495 "num_base_bdevs_discovered": 1, 00:13:20.495 "num_base_bdevs_operational": 1, 00:13:20.495 "base_bdevs_list": [ 00:13:20.495 { 00:13:20.495 "name": null, 00:13:20.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.495 "is_configured": false, 00:13:20.495 "data_offset": 0, 00:13:20.495 "data_size": 63488 00:13:20.495 }, 00:13:20.495 { 00:13:20.495 "name": "BaseBdev2", 00:13:20.495 "uuid": "5666b214-4fd6-44dd-bc1c-963f19167be4", 00:13:20.495 "is_configured": true, 00:13:20.495 "data_offset": 2048, 00:13:20.495 "data_size": 63488 00:13:20.495 } 00:13:20.495 ] 
00:13:20.495 }' 00:13:20.495 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.495 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.062 [2024-10-15 09:13:04.748466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:21.062 [2024-10-15 09:13:04.748731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.062 09:13:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62098 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 62098 ']' 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 62098 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62098 00:13:21.062 killing process with pid 62098 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62098' 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 62098 00:13:21.062 [2024-10-15 09:13:04.943288] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:21.062 09:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 62098 00:13:21.062 [2024-10-15 09:13:04.958990] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:22.435 09:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:22.435 00:13:22.435 real 0m5.757s 00:13:22.435 user 0m8.623s 00:13:22.435 sys 0m0.895s 00:13:22.435 09:13:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:22.435 ************************************ 00:13:22.435 END TEST raid_state_function_test_sb 00:13:22.435 ************************************ 00:13:22.435 09:13:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.435 09:13:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:13:22.435 09:13:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:22.435 09:13:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:22.435 09:13:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:22.435 ************************************ 00:13:22.435 START TEST raid_superblock_test 00:13:22.435 ************************************ 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:22.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62356 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62356 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 62356 ']' 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:22.435 09:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.435 [2024-10-15 09:13:06.239521] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:13:22.435 [2024-10-15 09:13:06.239726] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62356 ] 00:13:22.693 [2024-10-15 09:13:06.424826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.693 [2024-10-15 09:13:06.579603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.951 [2024-10-15 09:13:06.794219] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.951 [2024-10-15 09:13:06.794296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:23.519 
09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.519 malloc1 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.519 [2024-10-15 09:13:07.293395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:23.519 [2024-10-15 09:13:07.293496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.519 [2024-10-15 09:13:07.293533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:23.519 [2024-10-15 09:13:07.293550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.519 [2024-10-15 09:13:07.296297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.519 [2024-10-15 09:13:07.296342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:23.519 pt1 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.519 malloc2 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.519 [2024-10-15 09:13:07.345113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:23.519 [2024-10-15 09:13:07.345204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.519 [2024-10-15 09:13:07.345238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:23.519 [2024-10-15 09:13:07.345262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.519 [2024-10-15 09:13:07.348060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.519 [2024-10-15 09:13:07.348107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:23.519 
pt2 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.519 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.519 [2024-10-15 09:13:07.353214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:23.519 [2024-10-15 09:13:07.355626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:23.519 [2024-10-15 09:13:07.355833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:23.520 [2024-10-15 09:13:07.355853] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:23.520 [2024-10-15 09:13:07.356183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:23.520 [2024-10-15 09:13:07.356383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:23.520 [2024-10-15 09:13:07.356406] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:23.520 [2024-10-15 09:13:07.356593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.520 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.520 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:23.520 09:13:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.520 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.520 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:23.520 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.520 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:23.520 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.520 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.520 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.520 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.520 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.520 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.520 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.520 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.520 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.520 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.520 "name": "raid_bdev1", 00:13:23.520 "uuid": "459bc17d-33d3-4585-918e-e667ce392b61", 00:13:23.520 "strip_size_kb": 64, 00:13:23.520 "state": "online", 00:13:23.520 "raid_level": "concat", 00:13:23.520 "superblock": true, 00:13:23.520 "num_base_bdevs": 2, 00:13:23.520 "num_base_bdevs_discovered": 2, 00:13:23.520 "num_base_bdevs_operational": 2, 00:13:23.520 "base_bdevs_list": [ 00:13:23.520 { 00:13:23.520 "name": "pt1", 
00:13:23.520 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:23.520 "is_configured": true, 00:13:23.520 "data_offset": 2048, 00:13:23.520 "data_size": 63488 00:13:23.520 }, 00:13:23.520 { 00:13:23.520 "name": "pt2", 00:13:23.520 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:23.520 "is_configured": true, 00:13:23.520 "data_offset": 2048, 00:13:23.520 "data_size": 63488 00:13:23.520 } 00:13:23.520 ] 00:13:23.520 }' 00:13:23.520 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.520 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.086 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:24.086 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:24.086 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:24.086 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:24.086 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:24.086 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:24.086 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:24.086 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:24.086 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.086 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.086 [2024-10-15 09:13:07.869721] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.086 09:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.086 09:13:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:24.086 "name": "raid_bdev1", 00:13:24.086 "aliases": [ 00:13:24.086 "459bc17d-33d3-4585-918e-e667ce392b61" 00:13:24.086 ], 00:13:24.086 "product_name": "Raid Volume", 00:13:24.086 "block_size": 512, 00:13:24.086 "num_blocks": 126976, 00:13:24.086 "uuid": "459bc17d-33d3-4585-918e-e667ce392b61", 00:13:24.086 "assigned_rate_limits": { 00:13:24.086 "rw_ios_per_sec": 0, 00:13:24.086 "rw_mbytes_per_sec": 0, 00:13:24.086 "r_mbytes_per_sec": 0, 00:13:24.086 "w_mbytes_per_sec": 0 00:13:24.086 }, 00:13:24.086 "claimed": false, 00:13:24.086 "zoned": false, 00:13:24.086 "supported_io_types": { 00:13:24.086 "read": true, 00:13:24.086 "write": true, 00:13:24.086 "unmap": true, 00:13:24.086 "flush": true, 00:13:24.086 "reset": true, 00:13:24.086 "nvme_admin": false, 00:13:24.086 "nvme_io": false, 00:13:24.086 "nvme_io_md": false, 00:13:24.086 "write_zeroes": true, 00:13:24.086 "zcopy": false, 00:13:24.086 "get_zone_info": false, 00:13:24.086 "zone_management": false, 00:13:24.086 "zone_append": false, 00:13:24.086 "compare": false, 00:13:24.086 "compare_and_write": false, 00:13:24.086 "abort": false, 00:13:24.086 "seek_hole": false, 00:13:24.086 "seek_data": false, 00:13:24.086 "copy": false, 00:13:24.086 "nvme_iov_md": false 00:13:24.086 }, 00:13:24.086 "memory_domains": [ 00:13:24.086 { 00:13:24.086 "dma_device_id": "system", 00:13:24.086 "dma_device_type": 1 00:13:24.086 }, 00:13:24.086 { 00:13:24.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.086 "dma_device_type": 2 00:13:24.086 }, 00:13:24.086 { 00:13:24.086 "dma_device_id": "system", 00:13:24.086 "dma_device_type": 1 00:13:24.086 }, 00:13:24.086 { 00:13:24.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.086 "dma_device_type": 2 00:13:24.086 } 00:13:24.086 ], 00:13:24.086 "driver_specific": { 00:13:24.086 "raid": { 00:13:24.086 "uuid": "459bc17d-33d3-4585-918e-e667ce392b61", 00:13:24.086 "strip_size_kb": 64, 00:13:24.086 "state": "online", 00:13:24.086 
"raid_level": "concat", 00:13:24.086 "superblock": true, 00:13:24.086 "num_base_bdevs": 2, 00:13:24.086 "num_base_bdevs_discovered": 2, 00:13:24.086 "num_base_bdevs_operational": 2, 00:13:24.086 "base_bdevs_list": [ 00:13:24.086 { 00:13:24.086 "name": "pt1", 00:13:24.086 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:24.086 "is_configured": true, 00:13:24.086 "data_offset": 2048, 00:13:24.086 "data_size": 63488 00:13:24.086 }, 00:13:24.086 { 00:13:24.086 "name": "pt2", 00:13:24.086 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:24.086 "is_configured": true, 00:13:24.086 "data_offset": 2048, 00:13:24.086 "data_size": 63488 00:13:24.086 } 00:13:24.086 ] 00:13:24.086 } 00:13:24.086 } 00:13:24.086 }' 00:13:24.086 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:24.086 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:24.086 pt2' 00:13:24.086 09:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.086 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:24.086 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.086 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:24.086 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.086 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.086 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.345 09:13:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.345 [2024-10-15 09:13:08.121729] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=459bc17d-33d3-4585-918e-e667ce392b61 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
459bc17d-33d3-4585-918e-e667ce392b61 ']' 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.345 [2024-10-15 09:13:08.177423] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:24.345 [2024-10-15 09:13:08.177463] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:24.345 [2024-10-15 09:13:08.177580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.345 [2024-10-15 09:13:08.177645] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:24.345 [2024-10-15 09:13:08.177666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:24.345 09:13:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.345 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.604 [2024-10-15 09:13:08.309540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:24.604 [2024-10-15 09:13:08.312253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:24.604 [2024-10-15 09:13:08.312367] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:24.604 [2024-10-15 09:13:08.312452] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:24.604 [2024-10-15 09:13:08.312481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:24.604 [2024-10-15 09:13:08.312502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:24.604 request: 00:13:24.604 { 00:13:24.604 "name": "raid_bdev1", 00:13:24.604 "raid_level": "concat", 00:13:24.604 "base_bdevs": [ 00:13:24.604 "malloc1", 00:13:24.604 "malloc2" 00:13:24.604 ], 00:13:24.604 "strip_size_kb": 64, 
00:13:24.604 "superblock": false, 00:13:24.604 "method": "bdev_raid_create", 00:13:24.604 "req_id": 1 00:13:24.604 } 00:13:24.604 Got JSON-RPC error response 00:13:24.604 response: 00:13:24.604 { 00:13:24.604 "code": -17, 00:13:24.604 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:24.604 } 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.604 [2024-10-15 09:13:08.373489] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:13:24.604 [2024-10-15 09:13:08.373725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.604 [2024-10-15 09:13:08.373868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:24.604 [2024-10-15 09:13:08.374000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.604 [2024-10-15 09:13:08.377148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.604 [2024-10-15 09:13:08.377310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:24.604 [2024-10-15 09:13:08.377547] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:24.604 [2024-10-15 09:13:08.377773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:24.604 pt1 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.604 "name": "raid_bdev1", 00:13:24.604 "uuid": "459bc17d-33d3-4585-918e-e667ce392b61", 00:13:24.604 "strip_size_kb": 64, 00:13:24.604 "state": "configuring", 00:13:24.604 "raid_level": "concat", 00:13:24.604 "superblock": true, 00:13:24.604 "num_base_bdevs": 2, 00:13:24.604 "num_base_bdevs_discovered": 1, 00:13:24.604 "num_base_bdevs_operational": 2, 00:13:24.604 "base_bdevs_list": [ 00:13:24.604 { 00:13:24.604 "name": "pt1", 00:13:24.604 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:24.604 "is_configured": true, 00:13:24.604 "data_offset": 2048, 00:13:24.604 "data_size": 63488 00:13:24.604 }, 00:13:24.604 { 00:13:24.604 "name": null, 00:13:24.604 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:24.604 "is_configured": false, 00:13:24.604 "data_offset": 2048, 00:13:24.604 "data_size": 63488 00:13:24.604 } 00:13:24.604 ] 00:13:24.604 }' 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.604 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.170 [2024-10-15 09:13:08.885786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:25.170 [2024-10-15 09:13:08.886084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.170 [2024-10-15 09:13:08.886141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:25.170 [2024-10-15 09:13:08.886164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.170 [2024-10-15 09:13:08.886859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.170 [2024-10-15 09:13:08.886897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:25.170 [2024-10-15 09:13:08.887014] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:25.170 [2024-10-15 09:13:08.887062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:25.170 [2024-10-15 09:13:08.887235] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:25.170 [2024-10-15 09:13:08.887258] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:25.170 [2024-10-15 09:13:08.887573] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:25.170 [2024-10-15 09:13:08.887771] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:13:25.170 [2024-10-15 09:13:08.887788] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:25.170 [2024-10-15 09:13:08.887964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.170 pt2 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.170 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.170 "name": "raid_bdev1", 00:13:25.170 "uuid": "459bc17d-33d3-4585-918e-e667ce392b61", 00:13:25.170 "strip_size_kb": 64, 00:13:25.170 "state": "online", 00:13:25.170 "raid_level": "concat", 00:13:25.171 "superblock": true, 00:13:25.171 "num_base_bdevs": 2, 00:13:25.171 "num_base_bdevs_discovered": 2, 00:13:25.171 "num_base_bdevs_operational": 2, 00:13:25.171 "base_bdevs_list": [ 00:13:25.171 { 00:13:25.171 "name": "pt1", 00:13:25.171 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:25.171 "is_configured": true, 00:13:25.171 "data_offset": 2048, 00:13:25.171 "data_size": 63488 00:13:25.171 }, 00:13:25.171 { 00:13:25.171 "name": "pt2", 00:13:25.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:25.171 "is_configured": true, 00:13:25.171 "data_offset": 2048, 00:13:25.171 "data_size": 63488 00:13:25.171 } 00:13:25.171 ] 00:13:25.171 }' 00:13:25.171 09:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.171 09:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.737 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:25.738 09:13:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.738 [2024-10-15 09:13:09.370288] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:25.738 "name": "raid_bdev1", 00:13:25.738 "aliases": [ 00:13:25.738 "459bc17d-33d3-4585-918e-e667ce392b61" 00:13:25.738 ], 00:13:25.738 "product_name": "Raid Volume", 00:13:25.738 "block_size": 512, 00:13:25.738 "num_blocks": 126976, 00:13:25.738 "uuid": "459bc17d-33d3-4585-918e-e667ce392b61", 00:13:25.738 "assigned_rate_limits": { 00:13:25.738 "rw_ios_per_sec": 0, 00:13:25.738 "rw_mbytes_per_sec": 0, 00:13:25.738 "r_mbytes_per_sec": 0, 00:13:25.738 "w_mbytes_per_sec": 0 00:13:25.738 }, 00:13:25.738 "claimed": false, 00:13:25.738 "zoned": false, 00:13:25.738 "supported_io_types": { 00:13:25.738 "read": true, 00:13:25.738 "write": true, 00:13:25.738 "unmap": true, 00:13:25.738 "flush": true, 00:13:25.738 "reset": true, 00:13:25.738 "nvme_admin": false, 00:13:25.738 "nvme_io": false, 00:13:25.738 "nvme_io_md": false, 00:13:25.738 "write_zeroes": true, 00:13:25.738 "zcopy": false, 00:13:25.738 "get_zone_info": false, 00:13:25.738 "zone_management": false, 00:13:25.738 "zone_append": false, 00:13:25.738 "compare": false, 00:13:25.738 "compare_and_write": false, 00:13:25.738 "abort": false, 00:13:25.738 "seek_hole": false, 00:13:25.738 
"seek_data": false, 00:13:25.738 "copy": false, 00:13:25.738 "nvme_iov_md": false 00:13:25.738 }, 00:13:25.738 "memory_domains": [ 00:13:25.738 { 00:13:25.738 "dma_device_id": "system", 00:13:25.738 "dma_device_type": 1 00:13:25.738 }, 00:13:25.738 { 00:13:25.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.738 "dma_device_type": 2 00:13:25.738 }, 00:13:25.738 { 00:13:25.738 "dma_device_id": "system", 00:13:25.738 "dma_device_type": 1 00:13:25.738 }, 00:13:25.738 { 00:13:25.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.738 "dma_device_type": 2 00:13:25.738 } 00:13:25.738 ], 00:13:25.738 "driver_specific": { 00:13:25.738 "raid": { 00:13:25.738 "uuid": "459bc17d-33d3-4585-918e-e667ce392b61", 00:13:25.738 "strip_size_kb": 64, 00:13:25.738 "state": "online", 00:13:25.738 "raid_level": "concat", 00:13:25.738 "superblock": true, 00:13:25.738 "num_base_bdevs": 2, 00:13:25.738 "num_base_bdevs_discovered": 2, 00:13:25.738 "num_base_bdevs_operational": 2, 00:13:25.738 "base_bdevs_list": [ 00:13:25.738 { 00:13:25.738 "name": "pt1", 00:13:25.738 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:25.738 "is_configured": true, 00:13:25.738 "data_offset": 2048, 00:13:25.738 "data_size": 63488 00:13:25.738 }, 00:13:25.738 { 00:13:25.738 "name": "pt2", 00:13:25.738 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:25.738 "is_configured": true, 00:13:25.738 "data_offset": 2048, 00:13:25.738 "data_size": 63488 00:13:25.738 } 00:13:25.738 ] 00:13:25.738 } 00:13:25.738 } 00:13:25.738 }' 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:25.738 pt2' 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.738 09:13:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:25.738 [2024-10-15 09:13:09.630339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:25.738 09:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.997 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 459bc17d-33d3-4585-918e-e667ce392b61 '!=' 459bc17d-33d3-4585-918e-e667ce392b61 ']' 00:13:25.997 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:25.997 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:25.997 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:25.997 09:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62356 00:13:25.997 09:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 62356 ']' 00:13:25.997 09:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 62356 00:13:25.997 09:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:25.997 09:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:25.997 09:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62356 00:13:25.997 09:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:25.997 killing process with pid 62356 00:13:25.997 09:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:25.997 09:13:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 62356' 00:13:25.997 09:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 62356 00:13:25.997 [2024-10-15 09:13:09.710322] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:25.997 09:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 62356 00:13:25.997 [2024-10-15 09:13:09.710467] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.997 [2024-10-15 09:13:09.710550] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:25.997 [2024-10-15 09:13:09.710571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:25.997 [2024-10-15 09:13:09.911289] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:27.369 ************************************ 00:13:27.369 END TEST raid_superblock_test 00:13:27.369 ************************************ 00:13:27.369 09:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:27.369 00:13:27.369 real 0m4.886s 00:13:27.369 user 0m7.100s 00:13:27.369 sys 0m0.755s 00:13:27.369 09:13:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:27.369 09:13:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.369 09:13:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:13:27.369 09:13:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:27.369 09:13:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:27.369 09:13:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:27.369 ************************************ 00:13:27.369 START TEST raid_read_error_test 00:13:27.369 ************************************ 00:13:27.369 09:13:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:13:27.369 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:27.369 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:27.369 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:27.369 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:27.369 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:27.369 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:27.369 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:27.370 09:13:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rAqG9bXZCS 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62567 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62567 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 62567 ']' 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:27.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:27.370 09:13:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.370 [2024-10-15 09:13:11.196399] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:13:27.370 [2024-10-15 09:13:11.196873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62567 ] 00:13:27.670 [2024-10-15 09:13:11.368141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.670 [2024-10-15 09:13:11.512978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.929 [2024-10-15 09:13:11.735328] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.929 [2024-10-15 09:13:11.735656] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.497 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:28.497 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:28.497 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:28.497 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:28.497 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.497 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.497 BaseBdev1_malloc 00:13:28.497 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.498 true 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.498 [2024-10-15 09:13:12.256316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:28.498 [2024-10-15 09:13:12.256405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.498 [2024-10-15 09:13:12.256441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:28.498 [2024-10-15 09:13:12.256460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.498 [2024-10-15 09:13:12.259566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.498 [2024-10-15 09:13:12.259618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:28.498 BaseBdev1 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.498 BaseBdev2_malloc 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.498 true 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.498 [2024-10-15 09:13:12.327923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:28.498 [2024-10-15 09:13:12.328162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.498 [2024-10-15 09:13:12.328239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:28.498 [2024-10-15 09:13:12.328372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.498 [2024-10-15 09:13:12.331418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.498 [2024-10-15 09:13:12.331467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:28.498 BaseBdev2 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.498 [2024-10-15 09:13:12.340194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:13:28.498 [2024-10-15 09:13:12.342809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:28.498 [2024-10-15 09:13:12.343233] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:28.498 [2024-10-15 09:13:12.343267] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:28.498 [2024-10-15 09:13:12.343606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:28.498 [2024-10-15 09:13:12.343848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:28.498 [2024-10-15 09:13:12.343869] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:28.498 [2024-10-15 09:13:12.344158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.498 "name": "raid_bdev1", 00:13:28.498 "uuid": "faeb8820-9c90-4fbb-96bd-9f97af8365ba", 00:13:28.498 "strip_size_kb": 64, 00:13:28.498 "state": "online", 00:13:28.498 "raid_level": "concat", 00:13:28.498 "superblock": true, 00:13:28.498 "num_base_bdevs": 2, 00:13:28.498 "num_base_bdevs_discovered": 2, 00:13:28.498 "num_base_bdevs_operational": 2, 00:13:28.498 "base_bdevs_list": [ 00:13:28.498 { 00:13:28.498 "name": "BaseBdev1", 00:13:28.498 "uuid": "5e10ca37-e371-5ce2-9872-c5215bab6a74", 00:13:28.498 "is_configured": true, 00:13:28.498 "data_offset": 2048, 00:13:28.498 "data_size": 63488 00:13:28.498 }, 00:13:28.498 { 00:13:28.498 "name": "BaseBdev2", 00:13:28.498 "uuid": "62c1cc1e-e3f4-5940-aa58-e6c224fae865", 00:13:28.498 "is_configured": true, 00:13:28.498 "data_offset": 2048, 00:13:28.498 "data_size": 63488 00:13:28.498 } 00:13:28.498 ] 00:13:28.498 }' 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.498 09:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.065 09:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:29.065 09:13:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:29.065 [2024-10-15 09:13:12.970014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.001 09:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.001 "name": "raid_bdev1", 00:13:30.001 "uuid": "faeb8820-9c90-4fbb-96bd-9f97af8365ba", 00:13:30.001 "strip_size_kb": 64, 00:13:30.001 "state": "online", 00:13:30.001 "raid_level": "concat", 00:13:30.001 "superblock": true, 00:13:30.001 "num_base_bdevs": 2, 00:13:30.001 "num_base_bdevs_discovered": 2, 00:13:30.001 "num_base_bdevs_operational": 2, 00:13:30.001 "base_bdevs_list": [ 00:13:30.001 { 00:13:30.001 "name": "BaseBdev1", 00:13:30.001 "uuid": "5e10ca37-e371-5ce2-9872-c5215bab6a74", 00:13:30.001 "is_configured": true, 00:13:30.001 "data_offset": 2048, 00:13:30.002 "data_size": 63488 00:13:30.002 }, 00:13:30.002 { 00:13:30.002 "name": "BaseBdev2", 00:13:30.002 "uuid": "62c1cc1e-e3f4-5940-aa58-e6c224fae865", 00:13:30.002 "is_configured": true, 00:13:30.002 "data_offset": 2048, 00:13:30.002 "data_size": 63488 00:13:30.002 } 00:13:30.002 ] 00:13:30.002 }' 00:13:30.002 09:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.002 09:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.571 09:13:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:30.571 09:13:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.571 09:13:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.571 [2024-10-15 09:13:14.396602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:30.571 [2024-10-15 09:13:14.396797] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.571 [2024-10-15 09:13:14.400360] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.571 [2024-10-15 09:13:14.400549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.571 [2024-10-15 09:13:14.400641] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.571 [2024-10-15 09:13:14.400840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:30.571 { 00:13:30.571 "results": [ 00:13:30.571 { 00:13:30.571 "job": "raid_bdev1", 00:13:30.571 "core_mask": "0x1", 00:13:30.571 "workload": "randrw", 00:13:30.571 "percentage": 50, 00:13:30.571 "status": "finished", 00:13:30.571 "queue_depth": 1, 00:13:30.571 "io_size": 131072, 00:13:30.571 "runtime": 1.42383, 00:13:30.571 "iops": 10072.129397470204, 00:13:30.571 "mibps": 1259.0161746837755, 00:13:30.571 "io_failed": 1, 00:13:30.571 "io_timeout": 0, 00:13:30.571 "avg_latency_us": 139.65857025139135, 00:13:30.571 "min_latency_us": 43.52, 00:13:30.571 "max_latency_us": 1891.6072727272726 00:13:30.571 } 00:13:30.571 ], 00:13:30.571 "core_count": 1 00:13:30.571 } 00:13:30.571 09:13:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.571 09:13:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62567 00:13:30.571 09:13:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 62567 ']' 00:13:30.571 09:13:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 62567 00:13:30.571 09:13:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:13:30.571 09:13:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:30.571 09:13:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62567 00:13:30.571 killing process with pid 62567 00:13:30.571 09:13:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:30.571 09:13:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:30.571 09:13:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62567' 00:13:30.571 09:13:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 62567 00:13:30.571 09:13:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 62567 00:13:30.571 [2024-10-15 09:13:14.439484] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:30.829 [2024-10-15 09:13:14.572438] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:32.204 09:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rAqG9bXZCS 00:13:32.204 09:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:32.204 09:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:32.204 09:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:13:32.204 09:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:32.204 09:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:32.204 09:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:32.204 09:13:15 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:13:32.204 00:13:32.204 real 0m4.684s 00:13:32.204 user 0m5.787s 00:13:32.204 sys 0m0.629s 00:13:32.204 09:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:32.204 09:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.204 ************************************ 00:13:32.204 END TEST raid_read_error_test 00:13:32.204 ************************************ 00:13:32.204 09:13:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:13:32.204 09:13:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:32.204 09:13:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.204 09:13:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:32.204 ************************************ 00:13:32.204 START TEST raid_write_error_test 00:13:32.204 ************************************ 00:13:32.204 09:13:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:13:32.204 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:32.204 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:32.204 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:32.204 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:32.204 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:32.204 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:32.204 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:32.204 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:32.204 09:13:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:32.204 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:32.204 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:32.204 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:32.204 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:32.205 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:32.205 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:32.205 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:32.205 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:32.205 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:32.205 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:32.205 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:32.205 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:32.205 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:32.205 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nFpcUE2tqv 00:13:32.205 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:32.205 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62713 00:13:32.205 09:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62713 00:13:32.205 09:13:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 62713 ']' 00:13:32.205 09:13:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.205 09:13:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:32.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.205 09:13:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.205 09:13:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:32.205 09:13:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.205 [2024-10-15 09:13:15.921536] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:13:32.205 [2024-10-15 09:13:15.921694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62713 ] 00:13:32.205 [2024-10-15 09:13:16.091559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.463 [2024-10-15 09:13:16.240961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.722 [2024-10-15 09:13:16.465710] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.722 [2024-10-15 09:13:16.466070] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.290 09:13:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:33.290 09:13:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:33.290 09:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:13:33.290 09:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:33.290 09:13:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.290 09:13:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.290 BaseBdev1_malloc 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.290 true 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.290 [2024-10-15 09:13:17.023845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:33.290 [2024-10-15 09:13:17.023932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.290 [2024-10-15 09:13:17.023971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:33.290 [2024-10-15 09:13:17.023992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.290 [2024-10-15 09:13:17.027217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.290 [2024-10-15 09:13:17.027405] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:33.290 BaseBdev1 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.290 BaseBdev2_malloc 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.290 true 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.290 [2024-10-15 09:13:17.088853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:33.290 [2024-10-15 09:13:17.088929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.290 [2024-10-15 09:13:17.088959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:33.290 
[2024-10-15 09:13:17.088978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.290 [2024-10-15 09:13:17.091955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.290 [2024-10-15 09:13:17.092007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:33.290 BaseBdev2 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.290 [2024-10-15 09:13:17.096987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:33.290 [2024-10-15 09:13:17.099601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:33.290 [2024-10-15 09:13:17.099878] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:33.290 [2024-10-15 09:13:17.099906] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:33.290 [2024-10-15 09:13:17.100268] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:33.290 [2024-10-15 09:13:17.100534] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:33.290 [2024-10-15 09:13:17.100561] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:33.290 [2024-10-15 09:13:17.100775] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.290 
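For reference, the setup traced above layers four RPCs per base device: a malloc bdev (`bdev_malloc_create 32 512`) is wrapped in an error-injection bdev (`bdev_error_create`, which exposes it as `EE_<name>`), which is wrapped in a passthru bdev carrying the stable `BaseBdevN` name, and the two passthru bdevs are then assembled into the concat raid. A minimal pure-bash sketch of that loop, with `rpc_cmd` stubbed to echo the commands instead of sending them to a live SPDK target (against a real target these would go through `scripts/rpc.py`):

```shell
#!/usr/bin/env bash
# Sketch of the per-base-bdev setup loop from bdev_raid.sh (@814-@817)
# and the raid assembly (@821). rpc_cmd is stubbed so this only prints
# the command sequence; it does not talk to a running SPDK app.
rpc_cmd() { echo "rpc.py $*"; }

base_bdevs=('BaseBdev1' 'BaseBdev2')
for bdev in "${base_bdevs[@]}"; do
  rpc_cmd bdev_malloc_create 32 512 -b "${bdev}_malloc"          # 32 MB malloc bdev, 512 B blocks
  rpc_cmd bdev_error_create "${bdev}_malloc"                     # exposes it as EE_<name> for fault injection
  rpc_cmd bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev" # stable top-level name for the raid
done
rpc_cmd bdev_raid_create -z 64 -r concat -b "BaseBdev1 BaseBdev2" -n raid_bdev1 -s
```

Once the raid is online, `bdev_error_inject_error EE_BaseBdev1_malloc write failure` (seen a few lines below) makes the error layer fail I/O submitted by bdevperf.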
09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.290 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.290 "name": "raid_bdev1", 00:13:33.290 "uuid": "7ab0df97-2ca3-49d7-9d8a-39c6f50d1039", 00:13:33.290 "strip_size_kb": 64, 00:13:33.290 "state": "online", 00:13:33.290 "raid_level": "concat", 00:13:33.291 "superblock": true, 
00:13:33.291 "num_base_bdevs": 2, 00:13:33.291 "num_base_bdevs_discovered": 2, 00:13:33.291 "num_base_bdevs_operational": 2, 00:13:33.291 "base_bdevs_list": [ 00:13:33.291 { 00:13:33.291 "name": "BaseBdev1", 00:13:33.291 "uuid": "b5c2a16d-5d30-5d74-a5f0-51f88959d1ce", 00:13:33.291 "is_configured": true, 00:13:33.291 "data_offset": 2048, 00:13:33.291 "data_size": 63488 00:13:33.291 }, 00:13:33.291 { 00:13:33.291 "name": "BaseBdev2", 00:13:33.291 "uuid": "5cec3b7e-2f3c-557c-9490-ea00d7f5eb07", 00:13:33.291 "is_configured": true, 00:13:33.291 "data_offset": 2048, 00:13:33.291 "data_size": 63488 00:13:33.291 } 00:13:33.291 ] 00:13:33.291 }' 00:13:33.291 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.291 09:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.860 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:33.860 09:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:33.860 [2024-10-15 09:13:17.754714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.927 "name": "raid_bdev1", 00:13:34.927 "uuid": "7ab0df97-2ca3-49d7-9d8a-39c6f50d1039", 00:13:34.927 "strip_size_kb": 64, 00:13:34.927 "state": "online", 00:13:34.927 "raid_level": "concat", 
00:13:34.927 "superblock": true, 00:13:34.927 "num_base_bdevs": 2, 00:13:34.927 "num_base_bdevs_discovered": 2, 00:13:34.927 "num_base_bdevs_operational": 2, 00:13:34.927 "base_bdevs_list": [ 00:13:34.927 { 00:13:34.927 "name": "BaseBdev1", 00:13:34.927 "uuid": "b5c2a16d-5d30-5d74-a5f0-51f88959d1ce", 00:13:34.927 "is_configured": true, 00:13:34.927 "data_offset": 2048, 00:13:34.927 "data_size": 63488 00:13:34.927 }, 00:13:34.927 { 00:13:34.927 "name": "BaseBdev2", 00:13:34.927 "uuid": "5cec3b7e-2f3c-557c-9490-ea00d7f5eb07", 00:13:34.927 "is_configured": true, 00:13:34.927 "data_offset": 2048, 00:13:34.927 "data_size": 63488 00:13:34.927 } 00:13:34.927 ] 00:13:34.927 }' 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.927 09:13:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.187 09:13:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:35.187 09:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.187 09:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.187 [2024-10-15 09:13:19.096236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:35.187 [2024-10-15 09:13:19.096447] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:35.187 [2024-10-15 09:13:19.099957] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.187 [2024-10-15 09:13:19.100163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.187 [2024-10-15 09:13:19.100263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:35.187 [2024-10-15 09:13:19.100510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:35.187 { 
00:13:35.187 "results": [ 00:13:35.187 { 00:13:35.187 "job": "raid_bdev1", 00:13:35.187 "core_mask": "0x1", 00:13:35.187 "workload": "randrw", 00:13:35.187 "percentage": 50, 00:13:35.187 "status": "finished", 00:13:35.187 "queue_depth": 1, 00:13:35.187 "io_size": 131072, 00:13:35.187 "runtime": 1.339041, 00:13:35.187 "iops": 9948.911198387503, 00:13:35.187 "mibps": 1243.6138997984378, 00:13:35.187 "io_failed": 1, 00:13:35.187 "io_timeout": 0, 00:13:35.187 "avg_latency_us": 141.4937623931274, 00:13:35.187 "min_latency_us": 43.985454545454544, 00:13:35.187 "max_latency_us": 1891.6072727272726 00:13:35.187 } 00:13:35.187 ], 00:13:35.187 "core_count": 1 00:13:35.187 } 00:13:35.187 09:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.187 09:13:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62713 00:13:35.187 09:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 62713 ']' 00:13:35.187 09:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 62713 00:13:35.187 09:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:13:35.187 09:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:35.187 09:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62713 00:13:35.445 killing process with pid 62713 00:13:35.445 09:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:35.445 09:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:35.445 09:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62713' 00:13:35.445 09:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 62713 00:13:35.445 09:13:19 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@974 -- # wait 62713 00:13:35.445 [2024-10-15 09:13:19.145526] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:35.445 [2024-10-15 09:13:19.277563] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:36.820 09:13:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nFpcUE2tqv 00:13:36.820 09:13:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:36.820 09:13:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:36.820 09:13:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:13:36.820 09:13:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:36.820 ************************************ 00:13:36.820 END TEST raid_write_error_test 00:13:36.820 ************************************ 00:13:36.820 09:13:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:36.820 09:13:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:36.820 09:13:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:13:36.820 00:13:36.820 real 0m4.639s 00:13:36.820 user 0m5.749s 00:13:36.820 sys 0m0.607s 00:13:36.820 09:13:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:36.820 09:13:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.820 09:13:20 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:36.820 09:13:20 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:13:36.820 09:13:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:36.820 09:13:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:36.820 09:13:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
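The `fail_per_s` values the tests grep/awk out of the bdevperf logs (0.70 for the read test, 0.75 for the write test) are consistent with `io_failed / runtime` from the results JSON blobs above (1/1.42383 ≈ 0.70, 1/1.339041 ≈ 0.75), so the final `[[ $fail_per_s != \0\.\0\0 ]]` check effectively asserts that at least one injected I/O really failed. A small awk sketch reproducing both figures, assuming that interpretation of the column:

```shell
# fail-per-second as it appears in bdevperf's summary column,
# assumed to be io_failed / runtime from the results JSON.
fail_per_s() { awk -v f="$1" -v r="$2" 'BEGIN { printf "%.2f\n", f / r }'; }

fail_per_s 1 1.42383   # read-error test  -> 0.70
fail_per_s 1 1.339041  # write-error test -> 0.75
```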
00:13:36.820 ************************************ 00:13:36.820 START TEST raid_state_function_test 00:13:36.820 ************************************ 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local 
strip_size 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:36.820 Process raid pid: 62862 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62862 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62862' 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62862 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 62862 ']' 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:36.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:36.820 09:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.820 [2024-10-15 09:13:20.639396] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:13:36.820 [2024-10-15 09:13:20.639963] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.078 [2024-10-15 09:13:20.811945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.078 [2024-10-15 09:13:20.958857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.335 [2024-10-15 09:13:21.186744] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:37.335 [2024-10-15 09:13:21.187101] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.927 [2024-10-15 09:13:21.615433] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:37.927 [2024-10-15 09:13:21.615501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:37.927 [2024-10-15 09:13:21.615519] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:13:37.927 [2024-10-15 09:13:21.615535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:37.927 09:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.927 "name": "Existed_Raid", 00:13:37.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.927 "strip_size_kb": 0, 00:13:37.927 "state": "configuring", 00:13:37.927 "raid_level": "raid1", 00:13:37.927 "superblock": false, 00:13:37.928 "num_base_bdevs": 2, 00:13:37.928 "num_base_bdevs_discovered": 0, 00:13:37.928 "num_base_bdevs_operational": 2, 00:13:37.928 "base_bdevs_list": [ 00:13:37.928 { 00:13:37.928 "name": "BaseBdev1", 00:13:37.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.928 "is_configured": false, 00:13:37.928 "data_offset": 0, 00:13:37.928 "data_size": 0 00:13:37.928 }, 00:13:37.928 { 00:13:37.928 "name": "BaseBdev2", 00:13:37.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.928 "is_configured": false, 00:13:37.928 "data_offset": 0, 00:13:37.928 "data_size": 0 00:13:37.928 } 00:13:37.928 ] 00:13:37.928 }' 00:13:37.928 09:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.928 09:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.186 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:38.186 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.186 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.186 [2024-10-15 09:13:22.103492] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:38.186 [2024-10-15 09:13:22.103727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:38.186 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.186 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:38.186 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.186 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.445 [2024-10-15 09:13:22.115513] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:38.445 [2024-10-15 09:13:22.115580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:38.445 [2024-10-15 09:13:22.115596] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:38.445 [2024-10-15 09:13:22.115617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:38.445 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.445 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:38.445 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.445 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.445 [2024-10-15 09:13:22.163901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:38.445 BaseBdev1 00:13:38.445 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.445 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:38.445 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:38.445 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:38.445 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:38.445 
09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:38.445 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:38.445 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:38.445 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.445 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.445 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.445 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:38.445 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.445 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.445 [ 00:13:38.445 { 00:13:38.445 "name": "BaseBdev1", 00:13:38.445 "aliases": [ 00:13:38.445 "c5402664-c8fa-488c-bd32-2abe8ee7fb8e" 00:13:38.445 ], 00:13:38.445 "product_name": "Malloc disk", 00:13:38.445 "block_size": 512, 00:13:38.445 "num_blocks": 65536, 00:13:38.445 "uuid": "c5402664-c8fa-488c-bd32-2abe8ee7fb8e", 00:13:38.445 "assigned_rate_limits": { 00:13:38.445 "rw_ios_per_sec": 0, 00:13:38.445 "rw_mbytes_per_sec": 0, 00:13:38.445 "r_mbytes_per_sec": 0, 00:13:38.445 "w_mbytes_per_sec": 0 00:13:38.445 }, 00:13:38.445 "claimed": true, 00:13:38.445 "claim_type": "exclusive_write", 00:13:38.445 "zoned": false, 00:13:38.445 "supported_io_types": { 00:13:38.445 "read": true, 00:13:38.445 "write": true, 00:13:38.445 "unmap": true, 00:13:38.445 "flush": true, 00:13:38.445 "reset": true, 00:13:38.445 "nvme_admin": false, 00:13:38.445 "nvme_io": false, 00:13:38.445 "nvme_io_md": false, 00:13:38.445 "write_zeroes": true, 00:13:38.445 "zcopy": true, 00:13:38.445 "get_zone_info": 
false, 00:13:38.445 "zone_management": false, 00:13:38.445 "zone_append": false, 00:13:38.445 "compare": false, 00:13:38.445 "compare_and_write": false, 00:13:38.446 "abort": true, 00:13:38.446 "seek_hole": false, 00:13:38.446 "seek_data": false, 00:13:38.446 "copy": true, 00:13:38.446 "nvme_iov_md": false 00:13:38.446 }, 00:13:38.446 "memory_domains": [ 00:13:38.446 { 00:13:38.446 "dma_device_id": "system", 00:13:38.446 "dma_device_type": 1 00:13:38.446 }, 00:13:38.446 { 00:13:38.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.446 "dma_device_type": 2 00:13:38.446 } 00:13:38.446 ], 00:13:38.446 "driver_specific": {} 00:13:38.446 } 00:13:38.446 ] 00:13:38.446 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.446 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:38.446 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:38.446 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.446 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.446 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.446 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.446 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:38.446 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.446 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.446 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.446 09:13:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.446 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.446 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.446 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.446 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.446 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.446 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.446 "name": "Existed_Raid", 00:13:38.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.446 "strip_size_kb": 0, 00:13:38.446 "state": "configuring", 00:13:38.446 "raid_level": "raid1", 00:13:38.446 "superblock": false, 00:13:38.446 "num_base_bdevs": 2, 00:13:38.446 "num_base_bdevs_discovered": 1, 00:13:38.446 "num_base_bdevs_operational": 2, 00:13:38.446 "base_bdevs_list": [ 00:13:38.446 { 00:13:38.446 "name": "BaseBdev1", 00:13:38.446 "uuid": "c5402664-c8fa-488c-bd32-2abe8ee7fb8e", 00:13:38.446 "is_configured": true, 00:13:38.446 "data_offset": 0, 00:13:38.446 "data_size": 65536 00:13:38.446 }, 00:13:38.446 { 00:13:38.446 "name": "BaseBdev2", 00:13:38.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.446 "is_configured": false, 00:13:38.446 "data_offset": 0, 00:13:38.446 "data_size": 0 00:13:38.446 } 00:13:38.446 ] 00:13:38.446 }' 00:13:38.446 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.446 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- 
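The `waitforbdev BaseBdev1` helper seen above boils down to polling `bdev_get_bdevs -b <name> -t 2000` until the bdev is reported or the timeout expires. An illustrative poll loop in that spirit (this is a sketch, not the actual shell helper; the fake in-memory bdev list stands in for the RPC backend):

```python
import time

def wait_for_bdev(get_bdevs, name, timeout_ms=2000, poll_interval=0.05):
    """Poll until a bdev with `name` appears in get_bdevs(), or time out.

    Mirrors the intent of the waitforbdev helper in the trace (illustrative).
    """
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if any(b.get("name") == name for b in get_bdevs()):
            return True
        time.sleep(poll_interval)
    return False

# Fake backend for demonstration: BaseBdev1 exists, BaseBdev9 does not.
bdevs = [{"name": "BaseBdev1", "block_size": 512, "num_blocks": 65536}]
found = wait_for_bdev(lambda: bdevs, "BaseBdev1")
missing = wait_for_bdev(lambda: bdevs, "BaseBdev9", timeout_ms=200)
```

The block size (512) and block count (65536) match the `bdev_malloc_create 32 512` output dumped in this log.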
common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.013 [2024-10-15 09:13:22.700108] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:39.013 [2024-10-15 09:13:22.700203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.013 [2024-10-15 09:13:22.712230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.013 [2024-10-15 09:13:22.715091] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:39.013 [2024-10-15 09:13:22.715317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.013 "name": "Existed_Raid", 00:13:39.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.013 "strip_size_kb": 0, 00:13:39.013 "state": "configuring", 00:13:39.013 "raid_level": "raid1", 00:13:39.013 "superblock": false, 00:13:39.013 "num_base_bdevs": 2, 00:13:39.013 "num_base_bdevs_discovered": 1, 00:13:39.013 "num_base_bdevs_operational": 2, 00:13:39.013 "base_bdevs_list": [ 00:13:39.013 { 00:13:39.013 "name": "BaseBdev1", 00:13:39.013 "uuid": "c5402664-c8fa-488c-bd32-2abe8ee7fb8e", 00:13:39.013 
"is_configured": true, 00:13:39.013 "data_offset": 0, 00:13:39.013 "data_size": 65536 00:13:39.013 }, 00:13:39.013 { 00:13:39.013 "name": "BaseBdev2", 00:13:39.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.013 "is_configured": false, 00:13:39.013 "data_offset": 0, 00:13:39.013 "data_size": 0 00:13:39.013 } 00:13:39.013 ] 00:13:39.013 }' 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.013 09:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.580 [2024-10-15 09:13:23.293916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:39.580 [2024-10-15 09:13:23.294392] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:39.580 [2024-10-15 09:13:23.294418] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:39.580 [2024-10-15 09:13:23.294805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:39.580 [2024-10-15 09:13:23.295043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:39.580 [2024-10-15 09:13:23.295068] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:39.580 [2024-10-15 09:13:23.295445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.580 BaseBdev2 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.580 [ 00:13:39.580 { 00:13:39.580 "name": "BaseBdev2", 00:13:39.580 "aliases": [ 00:13:39.580 "fea72e02-03b4-4d32-8549-4aab01584bb4" 00:13:39.580 ], 00:13:39.580 "product_name": "Malloc disk", 00:13:39.580 "block_size": 512, 00:13:39.580 "num_blocks": 65536, 00:13:39.580 "uuid": "fea72e02-03b4-4d32-8549-4aab01584bb4", 00:13:39.580 "assigned_rate_limits": { 00:13:39.580 "rw_ios_per_sec": 0, 00:13:39.580 "rw_mbytes_per_sec": 0, 00:13:39.580 "r_mbytes_per_sec": 0, 00:13:39.580 "w_mbytes_per_sec": 0 00:13:39.580 }, 00:13:39.580 "claimed": true, 00:13:39.580 "claim_type": 
"exclusive_write", 00:13:39.580 "zoned": false, 00:13:39.580 "supported_io_types": { 00:13:39.580 "read": true, 00:13:39.580 "write": true, 00:13:39.580 "unmap": true, 00:13:39.580 "flush": true, 00:13:39.580 "reset": true, 00:13:39.580 "nvme_admin": false, 00:13:39.580 "nvme_io": false, 00:13:39.580 "nvme_io_md": false, 00:13:39.580 "write_zeroes": true, 00:13:39.580 "zcopy": true, 00:13:39.580 "get_zone_info": false, 00:13:39.580 "zone_management": false, 00:13:39.580 "zone_append": false, 00:13:39.580 "compare": false, 00:13:39.580 "compare_and_write": false, 00:13:39.580 "abort": true, 00:13:39.580 "seek_hole": false, 00:13:39.580 "seek_data": false, 00:13:39.580 "copy": true, 00:13:39.580 "nvme_iov_md": false 00:13:39.580 }, 00:13:39.580 "memory_domains": [ 00:13:39.580 { 00:13:39.580 "dma_device_id": "system", 00:13:39.580 "dma_device_type": 1 00:13:39.580 }, 00:13:39.580 { 00:13:39.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.580 "dma_device_type": 2 00:13:39.580 } 00:13:39.580 ], 00:13:39.580 "driver_specific": {} 00:13:39.580 } 00:13:39.580 ] 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.580 
09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.580 "name": "Existed_Raid", 00:13:39.580 "uuid": "b4766bb1-6f25-4090-8b2d-54b2895b6b0f", 00:13:39.580 "strip_size_kb": 0, 00:13:39.580 "state": "online", 00:13:39.580 "raid_level": "raid1", 00:13:39.580 "superblock": false, 00:13:39.580 "num_base_bdevs": 2, 00:13:39.580 "num_base_bdevs_discovered": 2, 00:13:39.580 "num_base_bdevs_operational": 2, 00:13:39.580 "base_bdevs_list": [ 00:13:39.580 { 00:13:39.580 "name": "BaseBdev1", 00:13:39.580 "uuid": "c5402664-c8fa-488c-bd32-2abe8ee7fb8e", 00:13:39.580 "is_configured": true, 00:13:39.580 "data_offset": 0, 00:13:39.580 "data_size": 65536 00:13:39.580 }, 00:13:39.580 { 00:13:39.580 "name": "BaseBdev2", 
00:13:39.580 "uuid": "fea72e02-03b4-4d32-8549-4aab01584bb4", 00:13:39.580 "is_configured": true, 00:13:39.580 "data_offset": 0, 00:13:39.580 "data_size": 65536 00:13:39.580 } 00:13:39.580 ] 00:13:39.580 }' 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.580 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.147 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:40.147 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:40.147 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:40.147 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:40.147 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:40.147 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:40.147 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:40.147 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:40.147 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.147 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.147 [2024-10-15 09:13:23.846530] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:40.147 09:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.147 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:40.147 "name": "Existed_Raid", 00:13:40.147 "aliases": [ 00:13:40.147 "b4766bb1-6f25-4090-8b2d-54b2895b6b0f" 00:13:40.147 ], 
00:13:40.147 "product_name": "Raid Volume", 00:13:40.147 "block_size": 512, 00:13:40.147 "num_blocks": 65536, 00:13:40.147 "uuid": "b4766bb1-6f25-4090-8b2d-54b2895b6b0f", 00:13:40.147 "assigned_rate_limits": { 00:13:40.147 "rw_ios_per_sec": 0, 00:13:40.147 "rw_mbytes_per_sec": 0, 00:13:40.147 "r_mbytes_per_sec": 0, 00:13:40.147 "w_mbytes_per_sec": 0 00:13:40.147 }, 00:13:40.147 "claimed": false, 00:13:40.147 "zoned": false, 00:13:40.147 "supported_io_types": { 00:13:40.147 "read": true, 00:13:40.147 "write": true, 00:13:40.147 "unmap": false, 00:13:40.147 "flush": false, 00:13:40.147 "reset": true, 00:13:40.147 "nvme_admin": false, 00:13:40.147 "nvme_io": false, 00:13:40.147 "nvme_io_md": false, 00:13:40.148 "write_zeroes": true, 00:13:40.148 "zcopy": false, 00:13:40.148 "get_zone_info": false, 00:13:40.148 "zone_management": false, 00:13:40.148 "zone_append": false, 00:13:40.148 "compare": false, 00:13:40.148 "compare_and_write": false, 00:13:40.148 "abort": false, 00:13:40.148 "seek_hole": false, 00:13:40.148 "seek_data": false, 00:13:40.148 "copy": false, 00:13:40.148 "nvme_iov_md": false 00:13:40.148 }, 00:13:40.148 "memory_domains": [ 00:13:40.148 { 00:13:40.148 "dma_device_id": "system", 00:13:40.148 "dma_device_type": 1 00:13:40.148 }, 00:13:40.148 { 00:13:40.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.148 "dma_device_type": 2 00:13:40.148 }, 00:13:40.148 { 00:13:40.148 "dma_device_id": "system", 00:13:40.148 "dma_device_type": 1 00:13:40.148 }, 00:13:40.148 { 00:13:40.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.148 "dma_device_type": 2 00:13:40.148 } 00:13:40.148 ], 00:13:40.148 "driver_specific": { 00:13:40.148 "raid": { 00:13:40.148 "uuid": "b4766bb1-6f25-4090-8b2d-54b2895b6b0f", 00:13:40.148 "strip_size_kb": 0, 00:13:40.148 "state": "online", 00:13:40.148 "raid_level": "raid1", 00:13:40.148 "superblock": false, 00:13:40.148 "num_base_bdevs": 2, 00:13:40.148 "num_base_bdevs_discovered": 2, 00:13:40.148 "num_base_bdevs_operational": 
2, 00:13:40.148 "base_bdevs_list": [ 00:13:40.148 { 00:13:40.148 "name": "BaseBdev1", 00:13:40.148 "uuid": "c5402664-c8fa-488c-bd32-2abe8ee7fb8e", 00:13:40.148 "is_configured": true, 00:13:40.148 "data_offset": 0, 00:13:40.148 "data_size": 65536 00:13:40.148 }, 00:13:40.148 { 00:13:40.148 "name": "BaseBdev2", 00:13:40.148 "uuid": "fea72e02-03b4-4d32-8549-4aab01584bb4", 00:13:40.148 "is_configured": true, 00:13:40.148 "data_offset": 0, 00:13:40.148 "data_size": 65536 00:13:40.148 } 00:13:40.148 ] 00:13:40.148 } 00:13:40.148 } 00:13:40.148 }' 00:13:40.148 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:40.148 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:40.148 BaseBdev2' 00:13:40.148 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.148 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:40.148 09:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:40.148 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:40.148 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.148 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.148 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.148 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.148 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:40.148 09:13:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:40.148 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:40.148 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:40.148 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.148 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.148 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.148 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.407 [2024-10-15 09:13:24.110329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
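After `bdev_malloc_delete BaseBdev1` above, `has_redundancy raid1` returns 0, so the script expects the array to stay `online` with one operational base bdev rather than failing. A small sketch of that state decision (illustrative only; the redundant-level set is assumed from the raid1 case in this trace, and the real script may treat other levels differently):

```python
# raid1 survives losing one base bdev; the set below is inferred from the
# has_redundancy check in the trace (an assumption, not the full list).
REDUNDANT_LEVELS = {"raid1"}

def expected_state_after_removal(raid_level, operational_before):
    """Expected raid bdev state after one base bdev is removed."""
    if raid_level in REDUNDANT_LEVELS and operational_before > 1:
        return "online", operational_before - 1
    return "offline", 0

state, operational = expected_state_after_removal("raid1", 2)
```

This matches the subsequent `verify_raid_bdev_state Existed_Raid online raid1 0 1` call in the log, where `num_base_bdevs_operational` has dropped from 2 to 1 but the state remains `online`.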
00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.407 "name": "Existed_Raid", 00:13:40.407 "uuid": 
"b4766bb1-6f25-4090-8b2d-54b2895b6b0f", 00:13:40.407 "strip_size_kb": 0, 00:13:40.407 "state": "online", 00:13:40.407 "raid_level": "raid1", 00:13:40.407 "superblock": false, 00:13:40.407 "num_base_bdevs": 2, 00:13:40.407 "num_base_bdevs_discovered": 1, 00:13:40.407 "num_base_bdevs_operational": 1, 00:13:40.407 "base_bdevs_list": [ 00:13:40.407 { 00:13:40.407 "name": null, 00:13:40.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.407 "is_configured": false, 00:13:40.407 "data_offset": 0, 00:13:40.407 "data_size": 65536 00:13:40.407 }, 00:13:40.407 { 00:13:40.407 "name": "BaseBdev2", 00:13:40.407 "uuid": "fea72e02-03b4-4d32-8549-4aab01584bb4", 00:13:40.407 "is_configured": true, 00:13:40.407 "data_offset": 0, 00:13:40.407 "data_size": 65536 00:13:40.407 } 00:13:40.407 ] 00:13:40.407 }' 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.407 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.976 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:40.976 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:40.976 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:40.976 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.976 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.976 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.976 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.976 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:40.976 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:13:40.976 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:40.976 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.976 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.976 [2024-10-15 09:13:24.771593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:40.976 [2024-10-15 09:13:24.771736] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:40.976 [2024-10-15 09:13:24.865437] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.976 [2024-10-15 09:13:24.865768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.976 [2024-10-15 09:13:24.865931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:40.976 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.976 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:40.976 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:40.976 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:40.976 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.976 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.976 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.976 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.242 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:41.242 
09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:41.242 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:41.242 09:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62862 00:13:41.242 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 62862 ']' 00:13:41.242 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 62862 00:13:41.242 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:41.242 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:41.242 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62862 00:13:41.242 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:41.242 killing process with pid 62862 00:13:41.242 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:41.242 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62862' 00:13:41.242 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 62862 00:13:41.242 [2024-10-15 09:13:24.954234] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:41.242 09:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 62862 00:13:41.242 [2024-10-15 09:13:24.969749] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:42.177 09:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:42.177 00:13:42.177 real 0m5.577s 00:13:42.177 user 0m8.277s 00:13:42.177 sys 0m0.859s 00:13:42.177 09:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:13:42.177 ************************************ 00:13:42.177 END TEST raid_state_function_test 00:13:42.177 ************************************ 00:13:42.177 09:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.436 09:13:26 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:13:42.436 09:13:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:42.436 09:13:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:42.436 09:13:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:42.436 ************************************ 00:13:42.436 START TEST raid_state_function_test_sb 00:13:42.436 ************************************ 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:42.436 Process raid pid: 63115 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63115 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63115' 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63115 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@831 -- # '[' -z 63115 ']' 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:42.436 09:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.436 [2024-10-15 09:13:26.241023] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:13:42.436 [2024-10-15 09:13:26.242198] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.695 [2024-10-15 09:13:26.412802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.695 [2024-10-15 09:13:26.560332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.953 [2024-10-15 09:13:26.786364] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:42.954 [2024-10-15 09:13:26.786679] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.520 [2024-10-15 09:13:27.295758] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:43.520 [2024-10-15 09:13:27.295838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:43.520 [2024-10-15 09:13:27.295856] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:43.520 [2024-10-15 09:13:27.295874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.520 "name": "Existed_Raid", 00:13:43.520 "uuid": "65f53ecc-3394-4a79-b5dd-1467cd35a0f0", 00:13:43.520 "strip_size_kb": 0, 00:13:43.520 "state": "configuring", 00:13:43.520 "raid_level": "raid1", 00:13:43.520 "superblock": true, 00:13:43.520 "num_base_bdevs": 2, 00:13:43.520 "num_base_bdevs_discovered": 0, 00:13:43.520 "num_base_bdevs_operational": 2, 00:13:43.520 "base_bdevs_list": [ 00:13:43.520 { 00:13:43.520 "name": "BaseBdev1", 00:13:43.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.520 "is_configured": false, 00:13:43.520 "data_offset": 0, 00:13:43.520 "data_size": 0 00:13:43.520 }, 00:13:43.520 { 00:13:43.520 "name": "BaseBdev2", 00:13:43.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.520 "is_configured": false, 00:13:43.520 "data_offset": 0, 00:13:43.520 "data_size": 0 00:13:43.520 } 00:13:43.520 ] 00:13:43.520 }' 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.520 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.088 [2024-10-15 09:13:27.827791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:44.088 [2024-10-15 09:13:27.827840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.088 [2024-10-15 09:13:27.835838] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:44.088 [2024-10-15 09:13:27.835902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:44.088 [2024-10-15 09:13:27.835919] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:44.088 [2024-10-15 09:13:27.835939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:44.088 [2024-10-15 09:13:27.884434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.088 BaseBdev1 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.088 [ 00:13:44.088 { 00:13:44.088 "name": "BaseBdev1", 00:13:44.088 "aliases": [ 00:13:44.088 "73ce9f56-86f3-403c-8326-c060e091f365" 00:13:44.088 ], 00:13:44.088 "product_name": "Malloc disk", 00:13:44.088 "block_size": 512, 
00:13:44.088 "num_blocks": 65536, 00:13:44.088 "uuid": "73ce9f56-86f3-403c-8326-c060e091f365", 00:13:44.088 "assigned_rate_limits": { 00:13:44.088 "rw_ios_per_sec": 0, 00:13:44.088 "rw_mbytes_per_sec": 0, 00:13:44.088 "r_mbytes_per_sec": 0, 00:13:44.088 "w_mbytes_per_sec": 0 00:13:44.088 }, 00:13:44.088 "claimed": true, 00:13:44.088 "claim_type": "exclusive_write", 00:13:44.088 "zoned": false, 00:13:44.088 "supported_io_types": { 00:13:44.088 "read": true, 00:13:44.088 "write": true, 00:13:44.088 "unmap": true, 00:13:44.088 "flush": true, 00:13:44.088 "reset": true, 00:13:44.088 "nvme_admin": false, 00:13:44.088 "nvme_io": false, 00:13:44.088 "nvme_io_md": false, 00:13:44.088 "write_zeroes": true, 00:13:44.088 "zcopy": true, 00:13:44.088 "get_zone_info": false, 00:13:44.088 "zone_management": false, 00:13:44.088 "zone_append": false, 00:13:44.088 "compare": false, 00:13:44.088 "compare_and_write": false, 00:13:44.088 "abort": true, 00:13:44.088 "seek_hole": false, 00:13:44.088 "seek_data": false, 00:13:44.088 "copy": true, 00:13:44.088 "nvme_iov_md": false 00:13:44.088 }, 00:13:44.088 "memory_domains": [ 00:13:44.088 { 00:13:44.088 "dma_device_id": "system", 00:13:44.088 "dma_device_type": 1 00:13:44.088 }, 00:13:44.088 { 00:13:44.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.088 "dma_device_type": 2 00:13:44.088 } 00:13:44.088 ], 00:13:44.088 "driver_specific": {} 00:13:44.088 } 00:13:44.088 ] 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.088 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:44.089 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:44.089 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.089 09:13:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.089 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.089 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.089 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:44.089 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.089 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.089 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.089 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.089 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.089 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.089 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.089 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.089 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.089 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.089 "name": "Existed_Raid", 00:13:44.089 "uuid": "20e3750f-a11a-4602-a70f-edbb634f52d0", 00:13:44.089 "strip_size_kb": 0, 00:13:44.089 "state": "configuring", 00:13:44.089 "raid_level": "raid1", 00:13:44.089 "superblock": true, 00:13:44.089 "num_base_bdevs": 2, 00:13:44.089 "num_base_bdevs_discovered": 1, 00:13:44.089 "num_base_bdevs_operational": 2, 00:13:44.089 "base_bdevs_list": [ 00:13:44.089 { 00:13:44.089 "name": "BaseBdev1", 
00:13:44.089 "uuid": "73ce9f56-86f3-403c-8326-c060e091f365", 00:13:44.089 "is_configured": true, 00:13:44.089 "data_offset": 2048, 00:13:44.089 "data_size": 63488 00:13:44.089 }, 00:13:44.089 { 00:13:44.089 "name": "BaseBdev2", 00:13:44.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.089 "is_configured": false, 00:13:44.089 "data_offset": 0, 00:13:44.089 "data_size": 0 00:13:44.089 } 00:13:44.089 ] 00:13:44.089 }' 00:13:44.089 09:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.089 09:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.655 [2024-10-15 09:13:28.444680] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:44.655 [2024-10-15 09:13:28.444905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.655 [2024-10-15 09:13:28.456817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.655 [2024-10-15 09:13:28.459581] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:13:44.655 [2024-10-15 09:13:28.459652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.655 "name": "Existed_Raid", 00:13:44.655 "uuid": "ed595dda-176f-4147-8d9b-8898bc166c2b", 00:13:44.655 "strip_size_kb": 0, 00:13:44.655 "state": "configuring", 00:13:44.655 "raid_level": "raid1", 00:13:44.655 "superblock": true, 00:13:44.655 "num_base_bdevs": 2, 00:13:44.655 "num_base_bdevs_discovered": 1, 00:13:44.655 "num_base_bdevs_operational": 2, 00:13:44.655 "base_bdevs_list": [ 00:13:44.655 { 00:13:44.655 "name": "BaseBdev1", 00:13:44.655 "uuid": "73ce9f56-86f3-403c-8326-c060e091f365", 00:13:44.655 "is_configured": true, 00:13:44.655 "data_offset": 2048, 00:13:44.655 "data_size": 63488 00:13:44.655 }, 00:13:44.655 { 00:13:44.655 "name": "BaseBdev2", 00:13:44.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.655 "is_configured": false, 00:13:44.655 "data_offset": 0, 00:13:44.655 "data_size": 0 00:13:44.655 } 00:13:44.655 ] 00:13:44.655 }' 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.655 09:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.222 09:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:45.222 09:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.222 09:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.222 [2024-10-15 09:13:28.994869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:45.222 [2024-10-15 09:13:28.995287] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:45.222 [2024-10-15 09:13:28.995310] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:45.222 [2024-10-15 09:13:28.995656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:45.222 BaseBdev2 00:13:45.222 [2024-10-15 09:13:28.995881] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:45.222 [2024-10-15 09:13:28.995904] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:45.222 [2024-10-15 09:13:28.996089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.222 09:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.222 09:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:45.222 09:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:45.222 09:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:45.222 09:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:45.222 09:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:45.222 09:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:45.222 09:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:45.222 09:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.222 09:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.222 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:45.222 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:45.222 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.222 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.222 [ 00:13:45.222 { 00:13:45.222 "name": "BaseBdev2", 00:13:45.222 "aliases": [ 00:13:45.222 "9f5aafff-2b1e-4951-96fa-caa23e38f282" 00:13:45.222 ], 00:13:45.222 "product_name": "Malloc disk", 00:13:45.222 "block_size": 512, 00:13:45.222 "num_blocks": 65536, 00:13:45.222 "uuid": "9f5aafff-2b1e-4951-96fa-caa23e38f282", 00:13:45.222 "assigned_rate_limits": { 00:13:45.222 "rw_ios_per_sec": 0, 00:13:45.223 "rw_mbytes_per_sec": 0, 00:13:45.223 "r_mbytes_per_sec": 0, 00:13:45.223 "w_mbytes_per_sec": 0 00:13:45.223 }, 00:13:45.223 "claimed": true, 00:13:45.223 "claim_type": "exclusive_write", 00:13:45.223 "zoned": false, 00:13:45.223 "supported_io_types": { 00:13:45.223 "read": true, 00:13:45.223 "write": true, 00:13:45.223 "unmap": true, 00:13:45.223 "flush": true, 00:13:45.223 "reset": true, 00:13:45.223 "nvme_admin": false, 00:13:45.223 "nvme_io": false, 00:13:45.223 "nvme_io_md": false, 00:13:45.223 "write_zeroes": true, 00:13:45.223 "zcopy": true, 00:13:45.223 "get_zone_info": false, 00:13:45.223 "zone_management": false, 00:13:45.223 "zone_append": false, 00:13:45.223 "compare": false, 00:13:45.223 "compare_and_write": false, 00:13:45.223 "abort": true, 00:13:45.223 "seek_hole": false, 00:13:45.223 "seek_data": false, 00:13:45.223 "copy": true, 00:13:45.223 "nvme_iov_md": false 00:13:45.223 }, 00:13:45.223 "memory_domains": [ 00:13:45.223 { 00:13:45.223 "dma_device_id": "system", 00:13:45.223 "dma_device_type": 1 00:13:45.223 }, 00:13:45.223 { 00:13:45.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.223 "dma_device_type": 2 00:13:45.223 } 00:13:45.223 ], 00:13:45.223 "driver_specific": 
{} 00:13:45.223 } 00:13:45.223 ] 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.223 "name": "Existed_Raid", 00:13:45.223 "uuid": "ed595dda-176f-4147-8d9b-8898bc166c2b", 00:13:45.223 "strip_size_kb": 0, 00:13:45.223 "state": "online", 00:13:45.223 "raid_level": "raid1", 00:13:45.223 "superblock": true, 00:13:45.223 "num_base_bdevs": 2, 00:13:45.223 "num_base_bdevs_discovered": 2, 00:13:45.223 "num_base_bdevs_operational": 2, 00:13:45.223 "base_bdevs_list": [ 00:13:45.223 { 00:13:45.223 "name": "BaseBdev1", 00:13:45.223 "uuid": "73ce9f56-86f3-403c-8326-c060e091f365", 00:13:45.223 "is_configured": true, 00:13:45.223 "data_offset": 2048, 00:13:45.223 "data_size": 63488 00:13:45.223 }, 00:13:45.223 { 00:13:45.223 "name": "BaseBdev2", 00:13:45.223 "uuid": "9f5aafff-2b1e-4951-96fa-caa23e38f282", 00:13:45.223 "is_configured": true, 00:13:45.223 "data_offset": 2048, 00:13:45.223 "data_size": 63488 00:13:45.223 } 00:13:45.223 ] 00:13:45.223 }' 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.223 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local 
name 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.791 [2024-10-15 09:13:29.547455] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:45.791 "name": "Existed_Raid", 00:13:45.791 "aliases": [ 00:13:45.791 "ed595dda-176f-4147-8d9b-8898bc166c2b" 00:13:45.791 ], 00:13:45.791 "product_name": "Raid Volume", 00:13:45.791 "block_size": 512, 00:13:45.791 "num_blocks": 63488, 00:13:45.791 "uuid": "ed595dda-176f-4147-8d9b-8898bc166c2b", 00:13:45.791 "assigned_rate_limits": { 00:13:45.791 "rw_ios_per_sec": 0, 00:13:45.791 "rw_mbytes_per_sec": 0, 00:13:45.791 "r_mbytes_per_sec": 0, 00:13:45.791 "w_mbytes_per_sec": 0 00:13:45.791 }, 00:13:45.791 "claimed": false, 00:13:45.791 "zoned": false, 00:13:45.791 "supported_io_types": { 00:13:45.791 "read": true, 00:13:45.791 "write": true, 00:13:45.791 "unmap": false, 00:13:45.791 "flush": false, 00:13:45.791 "reset": true, 00:13:45.791 "nvme_admin": false, 00:13:45.791 "nvme_io": false, 00:13:45.791 "nvme_io_md": false, 00:13:45.791 "write_zeroes": true, 00:13:45.791 "zcopy": false, 00:13:45.791 "get_zone_info": false, 00:13:45.791 "zone_management": false, 00:13:45.791 "zone_append": false, 00:13:45.791 "compare": false, 00:13:45.791 "compare_and_write": false, 
00:13:45.791 "abort": false, 00:13:45.791 "seek_hole": false, 00:13:45.791 "seek_data": false, 00:13:45.791 "copy": false, 00:13:45.791 "nvme_iov_md": false 00:13:45.791 }, 00:13:45.791 "memory_domains": [ 00:13:45.791 { 00:13:45.791 "dma_device_id": "system", 00:13:45.791 "dma_device_type": 1 00:13:45.791 }, 00:13:45.791 { 00:13:45.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.791 "dma_device_type": 2 00:13:45.791 }, 00:13:45.791 { 00:13:45.791 "dma_device_id": "system", 00:13:45.791 "dma_device_type": 1 00:13:45.791 }, 00:13:45.791 { 00:13:45.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.791 "dma_device_type": 2 00:13:45.791 } 00:13:45.791 ], 00:13:45.791 "driver_specific": { 00:13:45.791 "raid": { 00:13:45.791 "uuid": "ed595dda-176f-4147-8d9b-8898bc166c2b", 00:13:45.791 "strip_size_kb": 0, 00:13:45.791 "state": "online", 00:13:45.791 "raid_level": "raid1", 00:13:45.791 "superblock": true, 00:13:45.791 "num_base_bdevs": 2, 00:13:45.791 "num_base_bdevs_discovered": 2, 00:13:45.791 "num_base_bdevs_operational": 2, 00:13:45.791 "base_bdevs_list": [ 00:13:45.791 { 00:13:45.791 "name": "BaseBdev1", 00:13:45.791 "uuid": "73ce9f56-86f3-403c-8326-c060e091f365", 00:13:45.791 "is_configured": true, 00:13:45.791 "data_offset": 2048, 00:13:45.791 "data_size": 63488 00:13:45.791 }, 00:13:45.791 { 00:13:45.791 "name": "BaseBdev2", 00:13:45.791 "uuid": "9f5aafff-2b1e-4951-96fa-caa23e38f282", 00:13:45.791 "is_configured": true, 00:13:45.791 "data_offset": 2048, 00:13:45.791 "data_size": 63488 00:13:45.791 } 00:13:45.791 ] 00:13:45.791 } 00:13:45.791 } 00:13:45.791 }' 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:45.791 BaseBdev2' 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 
-- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.791 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.050 [2024-10-15 09:13:29.783241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:46.050 09:13:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.050 09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.050 "name": "Existed_Raid", 00:13:46.051 "uuid": "ed595dda-176f-4147-8d9b-8898bc166c2b", 00:13:46.051 "strip_size_kb": 0, 00:13:46.051 "state": "online", 00:13:46.051 "raid_level": "raid1", 00:13:46.051 "superblock": true, 00:13:46.051 "num_base_bdevs": 2, 00:13:46.051 "num_base_bdevs_discovered": 1, 00:13:46.051 "num_base_bdevs_operational": 1, 00:13:46.051 "base_bdevs_list": [ 00:13:46.051 { 00:13:46.051 "name": null, 00:13:46.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.051 "is_configured": false, 00:13:46.051 "data_offset": 0, 00:13:46.051 "data_size": 63488 00:13:46.051 }, 00:13:46.051 { 00:13:46.051 "name": "BaseBdev2", 00:13:46.051 "uuid": "9f5aafff-2b1e-4951-96fa-caa23e38f282", 00:13:46.051 "is_configured": true, 00:13:46.051 "data_offset": 2048, 00:13:46.051 "data_size": 63488 00:13:46.051 } 00:13:46.051 ] 00:13:46.051 }' 00:13:46.051 
09:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.051 09:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.617 09:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:46.617 09:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:46.617 09:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.617 09:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.617 09:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:46.617 09:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.617 09:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.617 09:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:46.617 09:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:46.617 09:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:46.617 09:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.617 09:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.617 [2024-10-15 09:13:30.428986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:46.617 [2024-10-15 09:13:30.429301] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:46.617 [2024-10-15 09:13:30.522859] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:46.617 [2024-10-15 09:13:30.522960] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:46.617 [2024-10-15 09:13:30.522982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:46.617 09:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.617 09:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:46.617 09:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:46.617 09:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:46.617 09:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.617 09:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.617 09:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.617 09:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.876 09:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:46.876 09:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:46.876 09:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:46.876 09:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63115 00:13:46.876 09:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 63115 ']' 00:13:46.876 09:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 63115 00:13:46.876 09:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:46.876 09:13:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:46.876 09:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63115 00:13:46.876 killing process with pid 63115 00:13:46.876 09:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:46.876 09:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:46.876 09:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63115' 00:13:46.876 09:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 63115 00:13:46.876 09:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 63115 00:13:46.876 [2024-10-15 09:13:30.610799] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:46.876 [2024-10-15 09:13:30.626111] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:47.812 09:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:47.812 00:13:47.812 real 0m5.594s 00:13:47.812 user 0m8.364s 00:13:47.812 sys 0m0.851s 00:13:47.812 ************************************ 00:13:47.812 END TEST raid_state_function_test_sb 00:13:47.812 ************************************ 00:13:47.812 09:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:47.812 09:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.072 09:13:31 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:13:48.072 09:13:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:48.072 09:13:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.072 09:13:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:48.072 
************************************ 00:13:48.072 START TEST raid_superblock_test 00:13:48.072 ************************************ 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63373 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63373 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 63373 ']' 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:48.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:48.072 09:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.072 [2024-10-15 09:13:31.918042] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:13:48.072 [2024-10-15 09:13:31.919172] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63373 ] 00:13:48.331 [2024-10-15 09:13:32.084698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.331 [2024-10-15 09:13:32.231299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.590 [2024-10-15 09:13:32.455029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:48.590 [2024-10-15 09:13:32.455467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.157 09:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:49.157 09:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:49.157 09:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:49.157 09:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:49.157 09:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:49.157 09:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:49.157 09:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:49.157 09:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:49.157 09:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:49.157 09:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:49.157 09:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:49.157 
09:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.158 09:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.158 malloc1 00:13:49.158 09:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.158 09:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:49.158 09:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.158 09:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.158 [2024-10-15 09:13:32.942889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:49.158 [2024-10-15 09:13:32.943193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.158 [2024-10-15 09:13:32.943282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:49.158 [2024-10-15 09:13:32.943510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.158 [2024-10-15 09:13:32.946784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.158 [2024-10-15 09:13:32.946956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:49.158 pt1 00:13:49.158 09:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.158 09:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:49.158 09:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:49.158 09:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:49.158 09:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:49.158 09:13:32 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:13:49.158 09:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:13:49.158 09:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:13:49.158 09:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:13:49.158 09:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:13:49.158 09:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.158 09:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.158 malloc2
00:13:49.158 09:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.158 09:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:49.158 09:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.158 09:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.158 [2024-10-15 09:13:32.999520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:49.158 [2024-10-15 09:13:32.999741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:49.158 [2024-10-15 09:13:32.999789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:13:49.158 [2024-10-15 09:13:32.999806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:49.158 [2024-10-15 09:13:33.002899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:49.158 [2024-10-15 09:13:33.003064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:49.158 pt2
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.158 [2024-10-15 09:13:33.011856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:13:49.158 [2024-10-15 09:13:33.014513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:49.158 [2024-10-15 09:13:33.014757] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:13:49.158 [2024-10-15 09:13:33.014777] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:13:49.158 [2024-10-15 09:13:33.015154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:13:49.158 [2024-10-15 09:13:33.015377] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:13:49.158 [2024-10-15 09:13:33.015399] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:13:49.158 [2024-10-15 09:13:33.015626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:49.158 "name": "raid_bdev1",
00:13:49.158 "uuid": "7628bdb2-7eaf-4509-aba8-b960e79992e7",
00:13:49.158 "strip_size_kb": 0,
00:13:49.158 "state": "online",
00:13:49.158 "raid_level": "raid1",
00:13:49.158 "superblock": true,
00:13:49.158 "num_base_bdevs": 2,
00:13:49.158 "num_base_bdevs_discovered": 2,
00:13:49.158 "num_base_bdevs_operational": 2,
00:13:49.158 "base_bdevs_list": [
00:13:49.158 {
00:13:49.158 "name": "pt1",
00:13:49.158 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:49.158 "is_configured": true,
00:13:49.158 "data_offset": 2048,
00:13:49.158 "data_size": 63488
00:13:49.158 },
00:13:49.158 {
00:13:49.158 "name": "pt2",
00:13:49.158 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:49.158 "is_configured": true,
00:13:49.158 "data_offset": 2048,
00:13:49.158 "data_size": 63488
00:13:49.158 }
00:13:49.158 ]
00:13:49.158 }'
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:49.158 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.726 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:13:49.726 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:13:49.726 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:49.726 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:49.726 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:13:49.726 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:49.726 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:49.726 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:13:49.726 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.726 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.726 [2024-10-15 09:13:33.556342] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:49.726 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.726 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:49.726 "name": "raid_bdev1",
00:13:49.726 "aliases": [
00:13:49.726 "7628bdb2-7eaf-4509-aba8-b960e79992e7"
00:13:49.726 ],
00:13:49.726 "product_name": "Raid Volume",
00:13:49.726 "block_size": 512,
00:13:49.726 "num_blocks": 63488,
00:13:49.726 "uuid": "7628bdb2-7eaf-4509-aba8-b960e79992e7",
00:13:49.726 "assigned_rate_limits": {
00:13:49.726 "rw_ios_per_sec": 0,
00:13:49.726 "rw_mbytes_per_sec": 0,
00:13:49.726 "r_mbytes_per_sec": 0,
00:13:49.726 "w_mbytes_per_sec": 0
00:13:49.726 },
00:13:49.726 "claimed": false,
00:13:49.726 "zoned": false,
00:13:49.726 "supported_io_types": {
00:13:49.726 "read": true,
00:13:49.726 "write": true,
00:13:49.726 "unmap": false,
00:13:49.726 "flush": false,
00:13:49.726 "reset": true,
00:13:49.726 "nvme_admin": false,
00:13:49.726 "nvme_io": false,
00:13:49.726 "nvme_io_md": false,
00:13:49.726 "write_zeroes": true,
00:13:49.726 "zcopy": false,
00:13:49.726 "get_zone_info": false,
00:13:49.726 "zone_management": false,
00:13:49.726 "zone_append": false,
00:13:49.726 "compare": false,
00:13:49.726 "compare_and_write": false,
00:13:49.726 "abort": false,
00:13:49.726 "seek_hole": false,
00:13:49.726 "seek_data": false,
00:13:49.726 "copy": false,
00:13:49.726 "nvme_iov_md": false
00:13:49.726 },
00:13:49.726 "memory_domains": [
00:13:49.726 {
00:13:49.726 "dma_device_id": "system",
00:13:49.726 "dma_device_type": 1
00:13:49.726 },
00:13:49.726 {
00:13:49.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:49.726 "dma_device_type": 2
00:13:49.726 },
00:13:49.726 {
00:13:49.726 "dma_device_id": "system",
00:13:49.726 "dma_device_type": 1
00:13:49.726 },
00:13:49.726 {
00:13:49.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:49.726 "dma_device_type": 2
00:13:49.726 }
00:13:49.726 ],
00:13:49.726 "driver_specific": {
00:13:49.726 "raid": {
00:13:49.726 "uuid": "7628bdb2-7eaf-4509-aba8-b960e79992e7",
00:13:49.726 "strip_size_kb": 0,
00:13:49.726 "state": "online",
00:13:49.726 "raid_level": "raid1",
00:13:49.726 "superblock": true,
00:13:49.726 "num_base_bdevs": 2,
00:13:49.726 "num_base_bdevs_discovered": 2,
00:13:49.726 "num_base_bdevs_operational": 2,
00:13:49.726 "base_bdevs_list": [
00:13:49.726 {
00:13:49.726 "name": "pt1",
00:13:49.726 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:49.726 "is_configured": true,
00:13:49.726 "data_offset": 2048,
00:13:49.726 "data_size": 63488
00:13:49.726 },
00:13:49.726 {
00:13:49.726 "name": "pt2",
00:13:49.726 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:49.726 "is_configured": true,
00:13:49.726 "data_offset": 2048,
00:13:49.726 "data_size": 63488
00:13:49.726 }
00:13:49.726 ]
00:13:49.726 }
00:13:49.726 }
00:13:49.726 }'
00:13:49.726 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:49.726 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:13:49.726 pt2'
00:13:49.726 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.985 [2024-10-15 09:13:33.820369] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7628bdb2-7eaf-4509-aba8-b960e79992e7
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7628bdb2-7eaf-4509-aba8-b960e79992e7 ']'
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.985 [2024-10-15 09:13:33.872042] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:49.985 [2024-10-15 09:13:33.872231] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:49.985 [2024-10-15 09:13:33.872391] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:49.985 [2024-10-15 09:13:33.872483] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:49.985 [2024-10-15 09:13:33.872504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.985 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:50.245 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:13:50.245 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:13:50.245 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:50.245 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:13:50.245 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:50.245 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:50.245 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:50.245 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:50.245 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:13:50.245 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:50.245 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:50.245 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:50.245 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:13:50.245 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:50.245 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:50.245 09:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:13:50.245 09:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:50.245 [2024-10-15 09:13:34.012093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:13:50.245 [2024-10-15 09:13:34.014811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:13:50.245 [2024-10-15 09:13:34.014918] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:13:50.245 [2024-10-15 09:13:34.015007] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:13:50.245 [2024-10-15 09:13:34.015035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:50.245 [2024-10-15 09:13:34.015051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:13:50.245 request:
00:13:50.245 {
00:13:50.245 "name": "raid_bdev1",
00:13:50.245 "raid_level": "raid1",
00:13:50.245 "base_bdevs": [
00:13:50.245 "malloc1",
00:13:50.245 "malloc2"
00:13:50.245 ],
00:13:50.245 "superblock": false,
00:13:50.245 "method": "bdev_raid_create",
00:13:50.245 "req_id": 1
00:13:50.245 }
00:13:50.245 Got JSON-RPC error response
00:13:50.245 response:
00:13:50.245 {
00:13:50.245 "code": -17,
00:13:50.245 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:13:50.245 }
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:50.245 [2024-10-15 09:13:34.076072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:13:50.245 [2024-10-15 09:13:34.076177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:50.245 [2024-10-15 09:13:34.076208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:13:50.245 [2024-10-15 09:13:34.076227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:50.245 [2024-10-15 09:13:34.079360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:50.245 [2024-10-15 09:13:34.079413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:13:50.245 [2024-10-15 09:13:34.079541] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:13:50.245 [2024-10-15 09:13:34.079630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:13:50.245 pt1
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:50.245 "name": "raid_bdev1",
00:13:50.245 "uuid": "7628bdb2-7eaf-4509-aba8-b960e79992e7",
00:13:50.245 "strip_size_kb": 0,
00:13:50.245 "state": "configuring",
00:13:50.245 "raid_level": "raid1",
00:13:50.245 "superblock": true,
00:13:50.245 "num_base_bdevs": 2,
00:13:50.245 "num_base_bdevs_discovered": 1,
00:13:50.245 "num_base_bdevs_operational": 2,
00:13:50.245 "base_bdevs_list": [
00:13:50.245 {
00:13:50.245 "name": "pt1",
00:13:50.245 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:50.245 "is_configured": true,
00:13:50.245 "data_offset": 2048,
00:13:50.245 "data_size": 63488
00:13:50.245 },
00:13:50.245 {
00:13:50.245 "name": null,
00:13:50.245 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:50.245 "is_configured": false,
00:13:50.245 "data_offset": 2048,
00:13:50.245 "data_size": 63488
00:13:50.245 }
00:13:50.245 ]
00:13:50.245 }'
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:50.245 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:50.812 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:13:50.812 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:13:50.812 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:50.812 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:50.812 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:50.812 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:50.812 [2024-10-15 09:13:34.612222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:50.812 [2024-10-15 09:13:34.612464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:50.812 [2024-10-15 09:13:34.612510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:13:50.812 [2024-10-15 09:13:34.612531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:50.812 [2024-10-15 09:13:34.613231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:50.812 [2024-10-15 09:13:34.613280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:50.812 [2024-10-15 09:13:34.613399] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:13:50.812 [2024-10-15 09:13:34.613438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:50.812 [2024-10-15 09:13:34.613608] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:13:50.812 [2024-10-15 09:13:34.613630] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:13:50.812 [2024-10-15 09:13:34.613961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:13:50.812 [2024-10-15 09:13:34.614207] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:13:50.812 [2024-10-15 09:13:34.614225] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:13:50.812 [2024-10-15 09:13:34.614410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:50.812 pt2
00:13:50.812 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:50.812 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:13:50.813 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:50.813 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:50.813 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:50.813 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:50.813 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:50.813 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:50.813 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:50.813 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:50.813 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:50.813 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:50.813 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:50.813 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:50.813 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:50.813 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:50.813 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:50.813 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:50.813 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:50.813 "name": "raid_bdev1",
00:13:50.813 "uuid": "7628bdb2-7eaf-4509-aba8-b960e79992e7",
00:13:50.813 "strip_size_kb": 0,
00:13:50.813 "state": "online",
00:13:50.813 "raid_level": "raid1",
00:13:50.813 "superblock": true,
00:13:50.813 "num_base_bdevs": 2,
00:13:50.813 "num_base_bdevs_discovered": 2,
00:13:50.813 "num_base_bdevs_operational": 2,
00:13:50.813 "base_bdevs_list": [
00:13:50.813 {
00:13:50.813 "name": "pt1",
00:13:50.813 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:50.813 "is_configured": true,
00:13:50.813 "data_offset": 2048,
00:13:50.813 "data_size": 63488
00:13:50.813 },
00:13:50.813 {
00:13:50.813 "name": "pt2",
00:13:50.813 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:50.813 "is_configured": true,
00:13:50.813 "data_offset": 2048,
00:13:50.813 "data_size": 63488
00:13:50.813 }
00:13:50.813 ]
00:13:50.813 }'
00:13:50.813 09:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:50.813 09:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.380 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:13:51.380 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:13:51.380 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:51.380 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:51.380 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:13:51.380 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:51.380 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:51.380 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:13:51.380 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.380 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.380 [2024-10-15 09:13:35.152711] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:51.380 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.380 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:51.380 "name": "raid_bdev1",
00:13:51.380 "aliases": [
00:13:51.380 "7628bdb2-7eaf-4509-aba8-b960e79992e7"
00:13:51.380 ],
00:13:51.380 "product_name": "Raid Volume",
00:13:51.380 "block_size": 512,
00:13:51.380 "num_blocks": 63488,
00:13:51.380 "uuid": "7628bdb2-7eaf-4509-aba8-b960e79992e7",
00:13:51.380 "assigned_rate_limits": {
00:13:51.380 "rw_ios_per_sec": 0,
00:13:51.380 "rw_mbytes_per_sec": 0,
00:13:51.380 "r_mbytes_per_sec": 0,
00:13:51.380 "w_mbytes_per_sec": 0
00:13:51.380 },
00:13:51.380 "claimed": false,
00:13:51.380 "zoned": false,
00:13:51.380 "supported_io_types": {
00:13:51.380 "read": true,
00:13:51.380 "write": true,
00:13:51.380 "unmap": false,
00:13:51.380 "flush": false,
00:13:51.380 "reset": true,
00:13:51.380 "nvme_admin": false,
00:13:51.380 "nvme_io": false,
00:13:51.380 "nvme_io_md": false,
00:13:51.380 "write_zeroes": true,
00:13:51.380 "zcopy": false,
00:13:51.380 "get_zone_info": false,
00:13:51.380 "zone_management": false,
00:13:51.380 "zone_append": false,
00:13:51.380 "compare": false,
00:13:51.380 "compare_and_write": false,
00:13:51.380 "abort": false,
00:13:51.380 "seek_hole": false,
00:13:51.380 "seek_data": false,
00:13:51.380 "copy": false,
00:13:51.380 "nvme_iov_md": false
00:13:51.380 },
00:13:51.380 "memory_domains": [
00:13:51.380 {
00:13:51.380 "dma_device_id": "system",
00:13:51.380 "dma_device_type": 1
00:13:51.380 },
00:13:51.380 {
00:13:51.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:51.380 "dma_device_type": 2
00:13:51.380 },
00:13:51.380 {
00:13:51.380 "dma_device_id": "system",
00:13:51.380 "dma_device_type": 1
00:13:51.380 },
00:13:51.380 {
00:13:51.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:51.380 "dma_device_type": 2
00:13:51.380 }
00:13:51.380 ],
00:13:51.380 "driver_specific": {
00:13:51.380 "raid": {
00:13:51.380 "uuid": "7628bdb2-7eaf-4509-aba8-b960e79992e7",
00:13:51.380 "strip_size_kb": 0,
00:13:51.380 "state": "online",
00:13:51.380 "raid_level": "raid1",
00:13:51.380 "superblock": true,
00:13:51.380 "num_base_bdevs": 2,
00:13:51.380 "num_base_bdevs_discovered": 2,
00:13:51.380 "num_base_bdevs_operational": 2,
00:13:51.380 "base_bdevs_list": [
00:13:51.380 {
00:13:51.380 "name": "pt1",
00:13:51.380 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:51.380 "is_configured": true,
00:13:51.380 "data_offset": 2048,
00:13:51.380 "data_size": 63488
00:13:51.380 },
00:13:51.380 {
00:13:51.380 "name": "pt2",
00:13:51.380 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:51.380 "is_configured": true,
00:13:51.380 "data_offset": 2048,
00:13:51.380 "data_size": 63488
00:13:51.380 }
00:13:51.380 ]
00:13:51.380 }
00:13:51.380 }
00:13:51.380 }'
00:13:51.380 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:51.380 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:13:51.380 pt2'
00:13:51.380 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:51.380 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:13:51.380 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:51.380 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:13:51.380 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:51.380 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.380 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.639 [2024-10-15 09:13:35.424784] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7628bdb2-7eaf-4509-aba8-b960e79992e7 '!=' 7628bdb2-7eaf-4509-aba8-b960e79992e7 ']'
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.639 [2024-10-15 09:13:35.464567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:51.639 "name": "raid_bdev1",
00:13:51.639 "uuid": "7628bdb2-7eaf-4509-aba8-b960e79992e7",
00:13:51.639 "strip_size_kb": 0,
00:13:51.639 "state": "online",
00:13:51.639 "raid_level": "raid1",
00:13:51.639 "superblock": true,
00:13:51.639 "num_base_bdevs": 2,
00:13:51.639 "num_base_bdevs_discovered": 1,
00:13:51.639 "num_base_bdevs_operational": 1,
00:13:51.639 "base_bdevs_list": [
00:13:51.639 {
00:13:51.639 "name": null,
00:13:51.639 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:51.639 "is_configured": false,
00:13:51.639 "data_offset": 0,
00:13:51.639 "data_size": 63488
00:13:51.639 },
00:13:51.639 {
00:13:51.639 "name": "pt2",
00:13:51.639 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:51.639 "is_configured": true,
00:13:51.639 "data_offset": 2048,
00:13:51.639 "data_size": 63488
00:13:51.639 }
00:13:51.639 ]
00:13:51.639 }'
00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.639 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.207 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:52.207 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.207 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.207 [2024-10-15 09:13:35.964601] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:52.207 [2024-10-15 09:13:35.964642] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:52.207 [2024-10-15 09:13:35.964757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:52.207 [2024-10-15 09:13:35.964830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:52.207 [2024-10-15 09:13:35.964850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:52.207 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.207 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.207 09:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:52.207 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.207 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.207 09:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.207 [2024-10-15 09:13:36.032640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:52.207 [2024-10-15 09:13:36.032729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.207 [2024-10-15 09:13:36.032758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:52.207 [2024-10-15 09:13:36.032776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.207 
[2024-10-15 09:13:36.035977] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.207 [2024-10-15 09:13:36.036031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:52.207 [2024-10-15 09:13:36.036177] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:52.207 [2024-10-15 09:13:36.036248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:52.207 [2024-10-15 09:13:36.036401] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:52.207 [2024-10-15 09:13:36.036424] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:52.207 [2024-10-15 09:13:36.036737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:52.207 [2024-10-15 09:13:36.036946] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:52.207 [2024-10-15 09:13:36.036962] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:52.207 [2024-10-15 09:13:36.037226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.207 pt2 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.207 "name": "raid_bdev1", 00:13:52.207 "uuid": "7628bdb2-7eaf-4509-aba8-b960e79992e7", 00:13:52.207 "strip_size_kb": 0, 00:13:52.207 "state": "online", 00:13:52.207 "raid_level": "raid1", 00:13:52.207 "superblock": true, 00:13:52.207 "num_base_bdevs": 2, 00:13:52.207 "num_base_bdevs_discovered": 1, 00:13:52.207 "num_base_bdevs_operational": 1, 00:13:52.207 "base_bdevs_list": [ 00:13:52.207 { 00:13:52.207 "name": null, 00:13:52.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.207 "is_configured": false, 00:13:52.207 "data_offset": 2048, 00:13:52.207 "data_size": 63488 00:13:52.207 }, 00:13:52.207 { 00:13:52.207 "name": "pt2", 00:13:52.207 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:52.207 "is_configured": true, 00:13:52.207 "data_offset": 2048, 00:13:52.207 "data_size": 63488 00:13:52.207 } 00:13:52.207 ] 00:13:52.207 }' 
00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.207 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.775 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:52.775 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.775 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.775 [2024-10-15 09:13:36.561320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:52.775 [2024-10-15 09:13:36.561382] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:52.775 [2024-10-15 09:13:36.561501] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:52.775 [2024-10-15 09:13:36.561580] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:52.775 [2024-10-15 09:13:36.561597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:52.775 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.775 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.775 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:52.775 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.775 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.775 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.775 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:52.775 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:13:52.775 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:13:52.775 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:52.775 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.775 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.775 [2024-10-15 09:13:36.621363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:52.775 [2024-10-15 09:13:36.621464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.775 [2024-10-15 09:13:36.621499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:52.775 [2024-10-15 09:13:36.621515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.775 [2024-10-15 09:13:36.624730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.775 [2024-10-15 09:13:36.624781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:52.775 [2024-10-15 09:13:36.624919] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:52.775 [2024-10-15 09:13:36.624985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:52.776 [2024-10-15 09:13:36.625188] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:52.776 [2024-10-15 09:13:36.625207] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:52.776 [2024-10-15 09:13:36.625232] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:52.776 [2024-10-15 09:13:36.625307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:13:52.776 [2024-10-15 09:13:36.625477] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:52.776 [2024-10-15 09:13:36.625493] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:52.776 pt1 00:13:52.776 [2024-10-15 09:13:36.625824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:52.776 [2024-10-15 09:13:36.626033] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:52.776 [2024-10-15 09:13:36.626054] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:52.776 [2024-10-15 09:13:36.626268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.776 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.776 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:13:52.776 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:52.776 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.776 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.776 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.776 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.776 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:52.776 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.776 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.776 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:52.776 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.776 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.776 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.776 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.776 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.776 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.776 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.776 "name": "raid_bdev1", 00:13:52.776 "uuid": "7628bdb2-7eaf-4509-aba8-b960e79992e7", 00:13:52.776 "strip_size_kb": 0, 00:13:52.776 "state": "online", 00:13:52.776 "raid_level": "raid1", 00:13:52.776 "superblock": true, 00:13:52.776 "num_base_bdevs": 2, 00:13:52.776 "num_base_bdevs_discovered": 1, 00:13:52.776 "num_base_bdevs_operational": 1, 00:13:52.776 "base_bdevs_list": [ 00:13:52.776 { 00:13:52.776 "name": null, 00:13:52.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.776 "is_configured": false, 00:13:52.776 "data_offset": 2048, 00:13:52.776 "data_size": 63488 00:13:52.776 }, 00:13:52.776 { 00:13:52.776 "name": "pt2", 00:13:52.776 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:52.776 "is_configured": true, 00:13:52.776 "data_offset": 2048, 00:13:52.776 "data_size": 63488 00:13:52.776 } 00:13:52.776 ] 00:13:52.776 }' 00:13:52.776 09:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.776 09:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.343 [2024-10-15 09:13:37.181793] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7628bdb2-7eaf-4509-aba8-b960e79992e7 '!=' 7628bdb2-7eaf-4509-aba8-b960e79992e7 ']' 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63373 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 63373 ']' 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 63373 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63373 00:13:53.343 09:13:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:53.343 killing process with pid 63373 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63373' 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 63373 00:13:53.343 [2024-10-15 09:13:37.255632] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:53.343 09:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 63373 00:13:53.343 [2024-10-15 09:13:37.255768] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.343 [2024-10-15 09:13:37.255842] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.343 [2024-10-15 09:13:37.255866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:53.602 [2024-10-15 09:13:37.458737] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:54.979 09:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:54.979 00:13:54.979 real 0m6.782s 00:13:54.979 user 0m10.603s 00:13:54.979 sys 0m1.059s 00:13:54.979 09:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:54.979 ************************************ 00:13:54.979 END TEST raid_superblock_test 00:13:54.979 ************************************ 00:13:54.979 09:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.979 09:13:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:13:54.979 09:13:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:54.979 09:13:38 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:13:54.979 09:13:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:54.979 ************************************ 00:13:54.979 START TEST raid_read_error_test 00:13:54.979 ************************************ 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:54.979 09:13:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dNuDrEYjGo 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63708 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63708 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 63708 ']' 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:54.979 09:13:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.979 [2024-10-15 09:13:38.744629] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:13:54.979 [2024-10-15 09:13:38.744869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63708 ] 00:13:55.238 [2024-10-15 09:13:38.926258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.238 [2024-10-15 09:13:39.095981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.497 [2024-10-15 09:13:39.317276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.497 [2024-10-15 09:13:39.317349] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:56.065 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.065 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:56.065 09:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:56.065 09:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:56.065 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.065 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.065 BaseBdev1_malloc 00:13:56.065 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.065 09:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:13:56.065 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.065 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.065 true 00:13:56.065 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.065 09:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:56.065 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.065 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.065 [2024-10-15 09:13:39.862227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:56.065 [2024-10-15 09:13:39.862312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.065 [2024-10-15 09:13:39.862350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:56.065 [2024-10-15 09:13:39.862371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.065 [2024-10-15 09:13:39.865517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.065 [2024-10-15 09:13:39.865567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:56.065 BaseBdev1 00:13:56.065 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:56.066 BaseBdev2_malloc 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.066 true 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.066 [2024-10-15 09:13:39.933831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:56.066 [2024-10-15 09:13:39.933904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.066 [2024-10-15 09:13:39.933935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:56.066 [2024-10-15 09:13:39.933966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.066 [2024-10-15 09:13:39.936956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.066 [2024-10-15 09:13:39.937004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:56.066 BaseBdev2 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:56.066 09:13:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.066 [2024-10-15 09:13:39.946017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.066 [2024-10-15 09:13:39.948660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.066 [2024-10-15 09:13:39.948951] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:56.066 [2024-10-15 09:13:39.948975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:56.066 [2024-10-15 09:13:39.949349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:56.066 [2024-10-15 09:13:39.949597] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:56.066 [2024-10-15 09:13:39.949614] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:56.066 [2024-10-15 09:13:39.949897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.066 09:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.325 09:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.325 "name": "raid_bdev1", 00:13:56.325 "uuid": "9c8505d2-8ecc-495e-919d-f464a3f93bbb", 00:13:56.325 "strip_size_kb": 0, 00:13:56.325 "state": "online", 00:13:56.325 "raid_level": "raid1", 00:13:56.325 "superblock": true, 00:13:56.325 "num_base_bdevs": 2, 00:13:56.325 "num_base_bdevs_discovered": 2, 00:13:56.325 "num_base_bdevs_operational": 2, 00:13:56.325 "base_bdevs_list": [ 00:13:56.325 { 00:13:56.325 "name": "BaseBdev1", 00:13:56.325 "uuid": "513d186c-6e09-5abc-aae4-c7badd034529", 00:13:56.325 "is_configured": true, 00:13:56.325 "data_offset": 2048, 00:13:56.325 "data_size": 63488 00:13:56.325 }, 00:13:56.325 { 00:13:56.325 "name": "BaseBdev2", 00:13:56.325 "uuid": "edf9a09b-61f9-5437-ac7c-cf92d0ef26db", 00:13:56.325 "is_configured": true, 00:13:56.325 "data_offset": 2048, 00:13:56.325 "data_size": 63488 00:13:56.325 } 00:13:56.325 ] 00:13:56.325 }' 00:13:56.325 09:13:40 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.325 09:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.582 09:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:56.582 09:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:56.839 [2024-10-15 09:13:40.595692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.774 09:13:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.774 "name": "raid_bdev1", 00:13:57.774 "uuid": "9c8505d2-8ecc-495e-919d-f464a3f93bbb", 00:13:57.774 "strip_size_kb": 0, 00:13:57.774 "state": "online", 00:13:57.774 "raid_level": "raid1", 00:13:57.774 "superblock": true, 00:13:57.774 "num_base_bdevs": 2, 00:13:57.774 "num_base_bdevs_discovered": 2, 00:13:57.774 "num_base_bdevs_operational": 2, 00:13:57.774 "base_bdevs_list": [ 00:13:57.774 { 00:13:57.774 "name": "BaseBdev1", 00:13:57.774 "uuid": "513d186c-6e09-5abc-aae4-c7badd034529", 00:13:57.774 "is_configured": true, 00:13:57.774 "data_offset": 2048, 00:13:57.774 "data_size": 63488 00:13:57.774 }, 00:13:57.774 { 00:13:57.774 "name": "BaseBdev2", 00:13:57.774 "uuid": "edf9a09b-61f9-5437-ac7c-cf92d0ef26db", 00:13:57.774 "is_configured": true, 00:13:57.774 "data_offset": 2048, 00:13:57.774 "data_size": 63488 
00:13:57.774 } 00:13:57.774 ] 00:13:57.774 }' 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.774 09:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.343 09:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:58.343 09:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.343 09:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.343 [2024-10-15 09:13:42.009428] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:58.343 [2024-10-15 09:13:42.009481] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:58.343 [2024-10-15 09:13:42.012782] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.343 [2024-10-15 09:13:42.012848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.343 [2024-10-15 09:13:42.012968] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:58.343 [2024-10-15 09:13:42.012989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:58.343 { 00:13:58.343 "results": [ 00:13:58.343 { 00:13:58.343 "job": "raid_bdev1", 00:13:58.343 "core_mask": "0x1", 00:13:58.343 "workload": "randrw", 00:13:58.343 "percentage": 50, 00:13:58.343 "status": "finished", 00:13:58.343 "queue_depth": 1, 00:13:58.343 "io_size": 131072, 00:13:58.343 "runtime": 1.411169, 00:13:58.343 "iops": 10747.118169404232, 00:13:58.343 "mibps": 1343.389771175529, 00:13:58.343 "io_failed": 0, 00:13:58.343 "io_timeout": 0, 00:13:58.343 "avg_latency_us": 88.91683358709075, 00:13:58.343 "min_latency_us": 43.75272727272727, 00:13:58.343 "max_latency_us": 1899.0545454545454 00:13:58.343 } 00:13:58.343 ], 
00:13:58.343 "core_count": 1 00:13:58.343 } 00:13:58.343 09:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.343 09:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63708 00:13:58.343 09:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 63708 ']' 00:13:58.343 09:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 63708 00:13:58.343 09:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:13:58.343 09:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:58.343 09:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63708 00:13:58.343 09:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:58.343 09:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:58.343 killing process with pid 63708 00:13:58.343 09:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63708' 00:13:58.343 09:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 63708 00:13:58.343 [2024-10-15 09:13:42.052592] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:58.343 09:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 63708 00:13:58.343 [2024-10-15 09:13:42.184433] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:59.720 09:13:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dNuDrEYjGo 00:13:59.720 09:13:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:59.720 09:13:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:59.720 09:13:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:59.720 09:13:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:59.720 09:13:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:59.720 09:13:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:59.720 09:13:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:59.720 00:13:59.720 real 0m4.737s 00:13:59.720 user 0m5.909s 00:13:59.720 sys 0m0.647s 00:13:59.720 09:13:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.720 09:13:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.720 ************************************ 00:13:59.720 END TEST raid_read_error_test 00:13:59.720 ************************************ 00:13:59.720 09:13:43 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:13:59.720 09:13:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:59.720 09:13:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:59.720 09:13:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:59.720 ************************************ 00:13:59.720 START TEST raid_write_error_test 00:13:59.720 ************************************ 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qoldlYnXOe 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63855 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63855 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 63855 ']' 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:59.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:59.720 09:13:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.720 [2024-10-15 09:13:43.535974] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:13:59.720 [2024-10-15 09:13:43.536225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63855 ] 00:13:59.978 [2024-10-15 09:13:43.708859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.978 [2024-10-15 09:13:43.854529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.236 [2024-10-15 09:13:44.077492] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.236 [2024-10-15 09:13:44.077572] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.801 BaseBdev1_malloc 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.801 true 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.801 [2024-10-15 09:13:44.625234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:00.801 [2024-10-15 09:13:44.625323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.801 [2024-10-15 09:13:44.625360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:00.801 [2024-10-15 09:13:44.625381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.801 [2024-10-15 09:13:44.628517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.801 [2024-10-15 09:13:44.628573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:00.801 BaseBdev1 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.801 BaseBdev2_malloc 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:00.801 09:13:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.801 true 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.801 [2024-10-15 09:13:44.693194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:00.801 [2024-10-15 09:13:44.693273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.801 [2024-10-15 09:13:44.693303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:00.801 [2024-10-15 09:13:44.693323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.801 [2024-10-15 09:13:44.696426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.801 [2024-10-15 09:13:44.696479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:00.801 BaseBdev2 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.801 [2024-10-15 09:13:44.705512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:14:00.801 [2024-10-15 09:13:44.708250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:00.801 [2024-10-15 09:13:44.708576] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:00.801 [2024-10-15 09:13:44.708619] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:00.801 [2024-10-15 09:13:44.708998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:00.801 [2024-10-15 09:13:44.709293] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:00.801 [2024-10-15 09:13:44.709321] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:00.801 [2024-10-15 09:13:44.709617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.801 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.059 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.059 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.059 "name": "raid_bdev1", 00:14:01.059 "uuid": "dc9e2492-2078-4631-baa6-d27b9a2b14a5", 00:14:01.059 "strip_size_kb": 0, 00:14:01.059 "state": "online", 00:14:01.059 "raid_level": "raid1", 00:14:01.059 "superblock": true, 00:14:01.059 "num_base_bdevs": 2, 00:14:01.059 "num_base_bdevs_discovered": 2, 00:14:01.059 "num_base_bdevs_operational": 2, 00:14:01.059 "base_bdevs_list": [ 00:14:01.059 { 00:14:01.059 "name": "BaseBdev1", 00:14:01.059 "uuid": "0a2356d5-367d-57b0-a8ae-31910ac25cd2", 00:14:01.059 "is_configured": true, 00:14:01.059 "data_offset": 2048, 00:14:01.059 "data_size": 63488 00:14:01.059 }, 00:14:01.059 { 00:14:01.059 "name": "BaseBdev2", 00:14:01.059 "uuid": "6e4d1d0d-8480-5fa5-a8b4-c8001a2bceaf", 00:14:01.059 "is_configured": true, 00:14:01.059 "data_offset": 2048, 00:14:01.059 "data_size": 63488 00:14:01.059 } 00:14:01.059 ] 00:14:01.059 }' 00:14:01.059 09:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.059 09:13:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.316 09:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:01.316 09:13:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:01.573 [2024-10-15 09:13:45.375258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.507 [2024-10-15 09:13:46.250156] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:02.507 [2024-10-15 09:13:46.250246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:02.507 [2024-10-15 09:13:46.250491] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.507 "name": "raid_bdev1", 00:14:02.507 "uuid": "dc9e2492-2078-4631-baa6-d27b9a2b14a5", 00:14:02.507 "strip_size_kb": 0, 00:14:02.507 "state": "online", 00:14:02.507 "raid_level": "raid1", 00:14:02.507 "superblock": true, 00:14:02.507 "num_base_bdevs": 2, 00:14:02.507 "num_base_bdevs_discovered": 1, 00:14:02.507 "num_base_bdevs_operational": 1, 00:14:02.507 "base_bdevs_list": [ 00:14:02.507 { 00:14:02.507 "name": null, 00:14:02.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.507 "is_configured": false, 00:14:02.507 "data_offset": 0, 00:14:02.507 "data_size": 63488 00:14:02.507 }, 00:14:02.507 { 00:14:02.507 "name": 
"BaseBdev2", 00:14:02.507 "uuid": "6e4d1d0d-8480-5fa5-a8b4-c8001a2bceaf", 00:14:02.507 "is_configured": true, 00:14:02.507 "data_offset": 2048, 00:14:02.507 "data_size": 63488 00:14:02.507 } 00:14:02.507 ] 00:14:02.507 }' 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.507 09:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.110 09:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:03.110 09:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.110 09:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.110 [2024-10-15 09:13:46.787400] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:03.110 [2024-10-15 09:13:46.787449] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:03.110 [2024-10-15 09:13:46.790795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.110 [2024-10-15 09:13:46.790859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.111 [2024-10-15 09:13:46.790946] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.111 [2024-10-15 09:13:46.790963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:03.111 { 00:14:03.111 "results": [ 00:14:03.111 { 00:14:03.111 "job": "raid_bdev1", 00:14:03.111 "core_mask": "0x1", 00:14:03.111 "workload": "randrw", 00:14:03.111 "percentage": 50, 00:14:03.111 "status": "finished", 00:14:03.111 "queue_depth": 1, 00:14:03.111 "io_size": 131072, 00:14:03.111 "runtime": 1.40952, 00:14:03.111 "iops": 12155.911232192519, 00:14:03.111 "mibps": 1519.4889040240648, 00:14:03.111 "io_failed": 0, 00:14:03.111 "io_timeout": 0, 
00:14:03.111 "avg_latency_us": 77.97790506913421, 00:14:03.111 "min_latency_us": 44.21818181818182, 00:14:03.111 "max_latency_us": 1846.9236363636364 00:14:03.111 } 00:14:03.111 ], 00:14:03.111 "core_count": 1 00:14:03.111 } 00:14:03.111 09:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.111 09:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63855 00:14:03.111 09:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 63855 ']' 00:14:03.111 09:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 63855 00:14:03.111 09:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:14:03.111 09:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:03.111 09:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63855 00:14:03.111 09:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:03.111 killing process with pid 63855 00:14:03.111 09:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:03.111 09:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63855' 00:14:03.111 09:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 63855 00:14:03.111 09:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 63855 00:14:03.111 [2024-10-15 09:13:46.834301] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:03.111 [2024-10-15 09:13:46.967071] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:04.484 09:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qoldlYnXOe 00:14:04.484 09:13:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:04.484 09:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:04.484 09:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:04.484 09:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:04.484 09:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:04.484 09:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:04.484 09:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:04.484 00:14:04.484 real 0m4.741s 00:14:04.484 user 0m5.906s 00:14:04.484 sys 0m0.638s 00:14:04.484 09:13:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:04.484 09:13:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.484 ************************************ 00:14:04.484 END TEST raid_write_error_test 00:14:04.484 ************************************ 00:14:04.484 09:13:48 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:14:04.484 09:13:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:04.484 09:13:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:14:04.484 09:13:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:04.484 09:13:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:04.484 09:13:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:04.484 ************************************ 00:14:04.484 START TEST raid_state_function_test 00:14:04.484 ************************************ 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:04.484 
09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:04.484 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:04.485 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63999 00:14:04.485 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:04.485 Process raid pid: 63999 00:14:04.485 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63999' 00:14:04.485 09:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63999 00:14:04.485 09:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 63999 ']' 00:14:04.485 09:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.485 09:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:04.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.485 09:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:04.485 09:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:04.485 09:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.485 [2024-10-15 09:13:48.340691] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:14:04.485 [2024-10-15 09:13:48.340967] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.742 [2024-10-15 09:13:48.533735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.001 [2024-10-15 09:13:48.746346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.258 [2024-10-15 09:13:48.988238] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.258 [2024-10-15 09:13:48.988314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.516 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:05.516 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:05.516 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:05.516 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.516 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.516 [2024-10-15 09:13:49.402084] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:05.516 [2024-10-15 09:13:49.402167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:05.516 [2024-10-15 09:13:49.402186] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:05.516 [2024-10-15 09:13:49.402202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:05.516 [2024-10-15 09:13:49.402212] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:05.516 [2024-10-15 09:13:49.402227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:05.516 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.516 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:05.516 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.516 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.516 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:05.516 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.516 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.516 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.516 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.516 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.517 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.517 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.517 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.517 09:13:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.517 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.517 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.774 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.774 "name": "Existed_Raid", 00:14:05.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.774 "strip_size_kb": 64, 00:14:05.774 "state": "configuring", 00:14:05.774 "raid_level": "raid0", 00:14:05.774 "superblock": false, 00:14:05.774 "num_base_bdevs": 3, 00:14:05.774 "num_base_bdevs_discovered": 0, 00:14:05.774 "num_base_bdevs_operational": 3, 00:14:05.774 "base_bdevs_list": [ 00:14:05.774 { 00:14:05.774 "name": "BaseBdev1", 00:14:05.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.774 "is_configured": false, 00:14:05.774 "data_offset": 0, 00:14:05.774 "data_size": 0 00:14:05.774 }, 00:14:05.774 { 00:14:05.774 "name": "BaseBdev2", 00:14:05.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.774 "is_configured": false, 00:14:05.774 "data_offset": 0, 00:14:05.774 "data_size": 0 00:14:05.774 }, 00:14:05.774 { 00:14:05.774 "name": "BaseBdev3", 00:14:05.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.774 "is_configured": false, 00:14:05.774 "data_offset": 0, 00:14:05.774 "data_size": 0 00:14:05.774 } 00:14:05.774 ] 00:14:05.774 }' 00:14:05.774 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.774 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.033 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:06.033 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.033 09:13:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.033 [2024-10-15 09:13:49.910151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:06.033 [2024-10-15 09:13:49.910204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:06.033 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.033 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:06.033 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.033 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.033 [2024-10-15 09:13:49.918153] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:06.033 [2024-10-15 09:13:49.918211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:06.033 [2024-10-15 09:13:49.918227] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:06.033 [2024-10-15 09:13:49.918243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:06.033 [2024-10-15 09:13:49.918252] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:06.033 [2024-10-15 09:13:49.918267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:06.033 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.033 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:06.033 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:06.033 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.292 [2024-10-15 09:13:49.966420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:06.292 BaseBdev1 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.292 [ 00:14:06.292 { 00:14:06.292 "name": "BaseBdev1", 00:14:06.292 "aliases": [ 00:14:06.292 "33169e96-b6e8-445f-ad7b-8a37005042e5" 00:14:06.292 ], 00:14:06.292 
"product_name": "Malloc disk", 00:14:06.292 "block_size": 512, 00:14:06.292 "num_blocks": 65536, 00:14:06.292 "uuid": "33169e96-b6e8-445f-ad7b-8a37005042e5", 00:14:06.292 "assigned_rate_limits": { 00:14:06.292 "rw_ios_per_sec": 0, 00:14:06.292 "rw_mbytes_per_sec": 0, 00:14:06.292 "r_mbytes_per_sec": 0, 00:14:06.292 "w_mbytes_per_sec": 0 00:14:06.292 }, 00:14:06.292 "claimed": true, 00:14:06.292 "claim_type": "exclusive_write", 00:14:06.292 "zoned": false, 00:14:06.292 "supported_io_types": { 00:14:06.292 "read": true, 00:14:06.292 "write": true, 00:14:06.292 "unmap": true, 00:14:06.292 "flush": true, 00:14:06.292 "reset": true, 00:14:06.292 "nvme_admin": false, 00:14:06.292 "nvme_io": false, 00:14:06.292 "nvme_io_md": false, 00:14:06.292 "write_zeroes": true, 00:14:06.292 "zcopy": true, 00:14:06.292 "get_zone_info": false, 00:14:06.292 "zone_management": false, 00:14:06.292 "zone_append": false, 00:14:06.292 "compare": false, 00:14:06.292 "compare_and_write": false, 00:14:06.292 "abort": true, 00:14:06.292 "seek_hole": false, 00:14:06.292 "seek_data": false, 00:14:06.292 "copy": true, 00:14:06.292 "nvme_iov_md": false 00:14:06.292 }, 00:14:06.292 "memory_domains": [ 00:14:06.292 { 00:14:06.292 "dma_device_id": "system", 00:14:06.292 "dma_device_type": 1 00:14:06.292 }, 00:14:06.292 { 00:14:06.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.292 "dma_device_type": 2 00:14:06.292 } 00:14:06.292 ], 00:14:06.292 "driver_specific": {} 00:14:06.292 } 00:14:06.292 ] 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.292 09:13:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.292 09:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.292 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.292 09:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.292 09:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.292 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.292 09:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.292 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.292 "name": "Existed_Raid", 00:14:06.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.292 "strip_size_kb": 64, 00:14:06.292 "state": "configuring", 00:14:06.292 "raid_level": "raid0", 00:14:06.292 "superblock": false, 00:14:06.292 "num_base_bdevs": 3, 00:14:06.292 "num_base_bdevs_discovered": 1, 00:14:06.292 "num_base_bdevs_operational": 3, 00:14:06.292 "base_bdevs_list": [ 00:14:06.292 { 00:14:06.292 "name": "BaseBdev1", 
00:14:06.292 "uuid": "33169e96-b6e8-445f-ad7b-8a37005042e5", 00:14:06.292 "is_configured": true, 00:14:06.292 "data_offset": 0, 00:14:06.292 "data_size": 65536 00:14:06.292 }, 00:14:06.292 { 00:14:06.292 "name": "BaseBdev2", 00:14:06.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.292 "is_configured": false, 00:14:06.292 "data_offset": 0, 00:14:06.292 "data_size": 0 00:14:06.292 }, 00:14:06.292 { 00:14:06.292 "name": "BaseBdev3", 00:14:06.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.292 "is_configured": false, 00:14:06.292 "data_offset": 0, 00:14:06.292 "data_size": 0 00:14:06.292 } 00:14:06.292 ] 00:14:06.292 }' 00:14:06.292 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.292 09:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.868 [2024-10-15 09:13:50.494631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:06.868 [2024-10-15 09:13:50.494710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.868 [2024-10-15 
09:13:50.502708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:06.868 [2024-10-15 09:13:50.505302] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:06.868 [2024-10-15 09:13:50.505361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:06.868 [2024-10-15 09:13:50.505379] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:06.868 [2024-10-15 09:13:50.505394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.868 "name": "Existed_Raid", 00:14:06.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.868 "strip_size_kb": 64, 00:14:06.868 "state": "configuring", 00:14:06.868 "raid_level": "raid0", 00:14:06.868 "superblock": false, 00:14:06.868 "num_base_bdevs": 3, 00:14:06.868 "num_base_bdevs_discovered": 1, 00:14:06.868 "num_base_bdevs_operational": 3, 00:14:06.868 "base_bdevs_list": [ 00:14:06.868 { 00:14:06.868 "name": "BaseBdev1", 00:14:06.868 "uuid": "33169e96-b6e8-445f-ad7b-8a37005042e5", 00:14:06.868 "is_configured": true, 00:14:06.868 "data_offset": 0, 00:14:06.868 "data_size": 65536 00:14:06.868 }, 00:14:06.868 { 00:14:06.868 "name": "BaseBdev2", 00:14:06.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.868 "is_configured": false, 00:14:06.868 "data_offset": 0, 00:14:06.868 "data_size": 0 00:14:06.868 }, 00:14:06.868 { 00:14:06.868 "name": "BaseBdev3", 00:14:06.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.868 "is_configured": false, 00:14:06.868 "data_offset": 0, 00:14:06.868 "data_size": 0 00:14:06.868 } 00:14:06.868 ] 00:14:06.868 }' 00:14:06.868 09:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:14:06.869 09:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.127 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:07.127 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.127 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.386 [2024-10-15 09:13:51.060766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:07.386 BaseBdev2 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:07.386 09:13:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.386 [ 00:14:07.386 { 00:14:07.386 "name": "BaseBdev2", 00:14:07.386 "aliases": [ 00:14:07.386 "1ce943b1-6faf-441f-a4e5-cdd36fe33d1a" 00:14:07.386 ], 00:14:07.386 "product_name": "Malloc disk", 00:14:07.386 "block_size": 512, 00:14:07.386 "num_blocks": 65536, 00:14:07.386 "uuid": "1ce943b1-6faf-441f-a4e5-cdd36fe33d1a", 00:14:07.386 "assigned_rate_limits": { 00:14:07.386 "rw_ios_per_sec": 0, 00:14:07.386 "rw_mbytes_per_sec": 0, 00:14:07.386 "r_mbytes_per_sec": 0, 00:14:07.386 "w_mbytes_per_sec": 0 00:14:07.386 }, 00:14:07.386 "claimed": true, 00:14:07.386 "claim_type": "exclusive_write", 00:14:07.386 "zoned": false, 00:14:07.386 "supported_io_types": { 00:14:07.386 "read": true, 00:14:07.386 "write": true, 00:14:07.386 "unmap": true, 00:14:07.386 "flush": true, 00:14:07.386 "reset": true, 00:14:07.386 "nvme_admin": false, 00:14:07.386 "nvme_io": false, 00:14:07.386 "nvme_io_md": false, 00:14:07.386 "write_zeroes": true, 00:14:07.386 "zcopy": true, 00:14:07.386 "get_zone_info": false, 00:14:07.386 "zone_management": false, 00:14:07.386 "zone_append": false, 00:14:07.386 "compare": false, 00:14:07.386 "compare_and_write": false, 00:14:07.386 "abort": true, 00:14:07.386 "seek_hole": false, 00:14:07.386 "seek_data": false, 00:14:07.386 "copy": true, 00:14:07.386 "nvme_iov_md": false 00:14:07.386 }, 00:14:07.386 "memory_domains": [ 00:14:07.386 { 00:14:07.386 "dma_device_id": "system", 00:14:07.386 "dma_device_type": 1 00:14:07.386 }, 00:14:07.386 { 00:14:07.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.386 "dma_device_type": 2 00:14:07.386 } 00:14:07.386 ], 00:14:07.386 "driver_specific": {} 00:14:07.386 } 00:14:07.386 ] 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.386 09:13:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.386 "name": "Existed_Raid", 00:14:07.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.386 "strip_size_kb": 64, 00:14:07.386 "state": "configuring", 00:14:07.386 "raid_level": "raid0", 00:14:07.386 "superblock": false, 00:14:07.386 "num_base_bdevs": 3, 00:14:07.386 "num_base_bdevs_discovered": 2, 00:14:07.386 "num_base_bdevs_operational": 3, 00:14:07.386 "base_bdevs_list": [ 00:14:07.386 { 00:14:07.386 "name": "BaseBdev1", 00:14:07.386 "uuid": "33169e96-b6e8-445f-ad7b-8a37005042e5", 00:14:07.386 "is_configured": true, 00:14:07.386 "data_offset": 0, 00:14:07.386 "data_size": 65536 00:14:07.386 }, 00:14:07.386 { 00:14:07.386 "name": "BaseBdev2", 00:14:07.386 "uuid": "1ce943b1-6faf-441f-a4e5-cdd36fe33d1a", 00:14:07.386 "is_configured": true, 00:14:07.386 "data_offset": 0, 00:14:07.386 "data_size": 65536 00:14:07.386 }, 00:14:07.386 { 00:14:07.386 "name": "BaseBdev3", 00:14:07.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.386 "is_configured": false, 00:14:07.386 "data_offset": 0, 00:14:07.386 "data_size": 0 00:14:07.386 } 00:14:07.386 ] 00:14:07.386 }' 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.386 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.953 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:07.953 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.953 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.953 [2024-10-15 09:13:51.703044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:07.953 [2024-10-15 09:13:51.703162] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:07.953 [2024-10-15 09:13:51.703188] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:07.953 [2024-10-15 09:13:51.703542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:07.953 [2024-10-15 09:13:51.703776] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:07.953 [2024-10-15 09:13:51.703800] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:07.953 [2024-10-15 09:13:51.704183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.953 BaseBdev3 00:14:07.953 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.954 
09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.954 [ 00:14:07.954 { 00:14:07.954 "name": "BaseBdev3", 00:14:07.954 "aliases": [ 00:14:07.954 "c39985d2-6ad8-4478-8fdd-0d44c2f15ab0" 00:14:07.954 ], 00:14:07.954 "product_name": "Malloc disk", 00:14:07.954 "block_size": 512, 00:14:07.954 "num_blocks": 65536, 00:14:07.954 "uuid": "c39985d2-6ad8-4478-8fdd-0d44c2f15ab0", 00:14:07.954 "assigned_rate_limits": { 00:14:07.954 "rw_ios_per_sec": 0, 00:14:07.954 "rw_mbytes_per_sec": 0, 00:14:07.954 "r_mbytes_per_sec": 0, 00:14:07.954 "w_mbytes_per_sec": 0 00:14:07.954 }, 00:14:07.954 "claimed": true, 00:14:07.954 "claim_type": "exclusive_write", 00:14:07.954 "zoned": false, 00:14:07.954 "supported_io_types": { 00:14:07.954 "read": true, 00:14:07.954 "write": true, 00:14:07.954 "unmap": true, 00:14:07.954 "flush": true, 00:14:07.954 "reset": true, 00:14:07.954 "nvme_admin": false, 00:14:07.954 "nvme_io": false, 00:14:07.954 "nvme_io_md": false, 00:14:07.954 "write_zeroes": true, 00:14:07.954 "zcopy": true, 00:14:07.954 "get_zone_info": false, 00:14:07.954 "zone_management": false, 00:14:07.954 "zone_append": false, 00:14:07.954 "compare": false, 00:14:07.954 "compare_and_write": false, 00:14:07.954 "abort": true, 00:14:07.954 "seek_hole": false, 00:14:07.954 "seek_data": false, 00:14:07.954 "copy": true, 00:14:07.954 "nvme_iov_md": false 00:14:07.954 }, 00:14:07.954 "memory_domains": [ 00:14:07.954 { 00:14:07.954 "dma_device_id": "system", 00:14:07.954 "dma_device_type": 1 00:14:07.954 }, 00:14:07.954 { 00:14:07.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.954 "dma_device_type": 2 00:14:07.954 } 00:14:07.954 ], 00:14:07.954 "driver_specific": {} 00:14:07.954 } 00:14:07.954 ] 
00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.954 "name": "Existed_Raid", 00:14:07.954 "uuid": "e1ee1aea-2f3c-4c2b-9c5e-61291bedb424", 00:14:07.954 "strip_size_kb": 64, 00:14:07.954 "state": "online", 00:14:07.954 "raid_level": "raid0", 00:14:07.954 "superblock": false, 00:14:07.954 "num_base_bdevs": 3, 00:14:07.954 "num_base_bdevs_discovered": 3, 00:14:07.954 "num_base_bdevs_operational": 3, 00:14:07.954 "base_bdevs_list": [ 00:14:07.954 { 00:14:07.954 "name": "BaseBdev1", 00:14:07.954 "uuid": "33169e96-b6e8-445f-ad7b-8a37005042e5", 00:14:07.954 "is_configured": true, 00:14:07.954 "data_offset": 0, 00:14:07.954 "data_size": 65536 00:14:07.954 }, 00:14:07.954 { 00:14:07.954 "name": "BaseBdev2", 00:14:07.954 "uuid": "1ce943b1-6faf-441f-a4e5-cdd36fe33d1a", 00:14:07.954 "is_configured": true, 00:14:07.954 "data_offset": 0, 00:14:07.954 "data_size": 65536 00:14:07.954 }, 00:14:07.954 { 00:14:07.954 "name": "BaseBdev3", 00:14:07.954 "uuid": "c39985d2-6ad8-4478-8fdd-0d44c2f15ab0", 00:14:07.954 "is_configured": true, 00:14:07.954 "data_offset": 0, 00:14:07.954 "data_size": 65536 00:14:07.954 } 00:14:07.954 ] 00:14:07.954 }' 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.954 09:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.521 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:08.521 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:08.521 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:08.521 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:14:08.521 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:08.521 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:08.521 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:08.521 09:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.521 09:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.521 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:08.521 [2024-10-15 09:13:52.251681] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:08.521 09:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.521 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:08.521 "name": "Existed_Raid", 00:14:08.521 "aliases": [ 00:14:08.521 "e1ee1aea-2f3c-4c2b-9c5e-61291bedb424" 00:14:08.521 ], 00:14:08.521 "product_name": "Raid Volume", 00:14:08.521 "block_size": 512, 00:14:08.521 "num_blocks": 196608, 00:14:08.521 "uuid": "e1ee1aea-2f3c-4c2b-9c5e-61291bedb424", 00:14:08.521 "assigned_rate_limits": { 00:14:08.521 "rw_ios_per_sec": 0, 00:14:08.521 "rw_mbytes_per_sec": 0, 00:14:08.521 "r_mbytes_per_sec": 0, 00:14:08.521 "w_mbytes_per_sec": 0 00:14:08.521 }, 00:14:08.521 "claimed": false, 00:14:08.521 "zoned": false, 00:14:08.521 "supported_io_types": { 00:14:08.522 "read": true, 00:14:08.522 "write": true, 00:14:08.522 "unmap": true, 00:14:08.522 "flush": true, 00:14:08.522 "reset": true, 00:14:08.522 "nvme_admin": false, 00:14:08.522 "nvme_io": false, 00:14:08.522 "nvme_io_md": false, 00:14:08.522 "write_zeroes": true, 00:14:08.522 "zcopy": false, 00:14:08.522 "get_zone_info": false, 00:14:08.522 "zone_management": false, 00:14:08.522 
"zone_append": false, 00:14:08.522 "compare": false, 00:14:08.522 "compare_and_write": false, 00:14:08.522 "abort": false, 00:14:08.522 "seek_hole": false, 00:14:08.522 "seek_data": false, 00:14:08.522 "copy": false, 00:14:08.522 "nvme_iov_md": false 00:14:08.522 }, 00:14:08.522 "memory_domains": [ 00:14:08.522 { 00:14:08.522 "dma_device_id": "system", 00:14:08.522 "dma_device_type": 1 00:14:08.522 }, 00:14:08.522 { 00:14:08.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.522 "dma_device_type": 2 00:14:08.522 }, 00:14:08.522 { 00:14:08.522 "dma_device_id": "system", 00:14:08.522 "dma_device_type": 1 00:14:08.522 }, 00:14:08.522 { 00:14:08.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.522 "dma_device_type": 2 00:14:08.522 }, 00:14:08.522 { 00:14:08.522 "dma_device_id": "system", 00:14:08.522 "dma_device_type": 1 00:14:08.522 }, 00:14:08.522 { 00:14:08.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.522 "dma_device_type": 2 00:14:08.522 } 00:14:08.522 ], 00:14:08.522 "driver_specific": { 00:14:08.522 "raid": { 00:14:08.522 "uuid": "e1ee1aea-2f3c-4c2b-9c5e-61291bedb424", 00:14:08.522 "strip_size_kb": 64, 00:14:08.522 "state": "online", 00:14:08.522 "raid_level": "raid0", 00:14:08.522 "superblock": false, 00:14:08.522 "num_base_bdevs": 3, 00:14:08.522 "num_base_bdevs_discovered": 3, 00:14:08.522 "num_base_bdevs_operational": 3, 00:14:08.522 "base_bdevs_list": [ 00:14:08.522 { 00:14:08.522 "name": "BaseBdev1", 00:14:08.522 "uuid": "33169e96-b6e8-445f-ad7b-8a37005042e5", 00:14:08.522 "is_configured": true, 00:14:08.522 "data_offset": 0, 00:14:08.522 "data_size": 65536 00:14:08.522 }, 00:14:08.522 { 00:14:08.522 "name": "BaseBdev2", 00:14:08.522 "uuid": "1ce943b1-6faf-441f-a4e5-cdd36fe33d1a", 00:14:08.522 "is_configured": true, 00:14:08.522 "data_offset": 0, 00:14:08.522 "data_size": 65536 00:14:08.522 }, 00:14:08.522 { 00:14:08.522 "name": "BaseBdev3", 00:14:08.522 "uuid": "c39985d2-6ad8-4478-8fdd-0d44c2f15ab0", 00:14:08.522 "is_configured": true, 
00:14:08.522 "data_offset": 0, 00:14:08.522 "data_size": 65536 00:14:08.522 } 00:14:08.522 ] 00:14:08.522 } 00:14:08.522 } 00:14:08.522 }' 00:14:08.522 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:08.522 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:08.522 BaseBdev2 00:14:08.522 BaseBdev3' 00:14:08.522 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.522 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:08.522 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.522 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:08.522 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.522 09:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.522 09:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.522 09:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.522 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.522 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.522 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.522 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:08.522 09:13:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.522 09:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.522 09:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.780 [2024-10-15 09:13:52.555433] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:08.780 [2024-10-15 09:13:52.555473] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:08.780 [2024-10-15 09:13:52.555550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.780 09:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.037 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.037 "name": "Existed_Raid", 00:14:09.037 "uuid": "e1ee1aea-2f3c-4c2b-9c5e-61291bedb424", 00:14:09.037 "strip_size_kb": 64, 00:14:09.037 "state": "offline", 00:14:09.037 "raid_level": "raid0", 00:14:09.037 "superblock": false, 00:14:09.037 "num_base_bdevs": 3, 00:14:09.037 "num_base_bdevs_discovered": 2, 00:14:09.037 "num_base_bdevs_operational": 2, 00:14:09.037 "base_bdevs_list": [ 00:14:09.037 { 00:14:09.037 "name": null, 00:14:09.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.037 "is_configured": false, 00:14:09.037 "data_offset": 0, 00:14:09.037 "data_size": 65536 00:14:09.037 }, 00:14:09.037 { 00:14:09.037 "name": "BaseBdev2", 00:14:09.037 "uuid": "1ce943b1-6faf-441f-a4e5-cdd36fe33d1a", 00:14:09.037 "is_configured": true, 00:14:09.037 "data_offset": 0, 00:14:09.037 "data_size": 65536 00:14:09.037 }, 00:14:09.037 { 00:14:09.037 "name": "BaseBdev3", 00:14:09.037 "uuid": "c39985d2-6ad8-4478-8fdd-0d44c2f15ab0", 00:14:09.037 "is_configured": true, 00:14:09.037 "data_offset": 0, 00:14:09.037 "data_size": 65536 00:14:09.037 } 00:14:09.037 ] 00:14:09.037 }' 00:14:09.037 09:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.037 09:13:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.295 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:09.295 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:09.295 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:09.295 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.295 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.295 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.295 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.553 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:09.553 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:09.553 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:09.553 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.553 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.553 [2024-10-15 09:13:53.245871] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:09.553 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.553 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:09.553 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:09.553 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.553 09:13:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.553 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.553 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:09.553 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.553 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:09.553 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:09.553 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:09.553 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.553 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.553 [2024-10-15 09:13:53.415330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:09.553 [2024-10-15 09:13:53.415426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | 
select(.)' 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.812 BaseBdev2 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.812 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.813 [ 00:14:09.813 { 00:14:09.813 "name": "BaseBdev2", 00:14:09.813 "aliases": [ 00:14:09.813 "9bacbfb0-8b28-4e91-b2b8-f5566e743ba3" 00:14:09.813 ], 00:14:09.813 "product_name": "Malloc disk", 00:14:09.813 "block_size": 512, 00:14:09.813 "num_blocks": 65536, 00:14:09.813 "uuid": "9bacbfb0-8b28-4e91-b2b8-f5566e743ba3", 00:14:09.813 "assigned_rate_limits": { 00:14:09.813 "rw_ios_per_sec": 0, 00:14:09.813 "rw_mbytes_per_sec": 0, 00:14:09.813 "r_mbytes_per_sec": 0, 00:14:09.813 "w_mbytes_per_sec": 0 00:14:09.813 }, 00:14:09.813 "claimed": false, 00:14:09.813 "zoned": false, 00:14:09.813 "supported_io_types": { 00:14:09.813 "read": true, 00:14:09.813 "write": true, 00:14:09.813 "unmap": true, 00:14:09.813 "flush": true, 00:14:09.813 "reset": true, 00:14:09.813 "nvme_admin": false, 00:14:09.813 "nvme_io": false, 00:14:09.813 "nvme_io_md": false, 00:14:09.813 "write_zeroes": true, 00:14:09.813 "zcopy": true, 00:14:09.813 "get_zone_info": false, 00:14:09.813 "zone_management": false, 00:14:09.813 "zone_append": false, 00:14:09.813 "compare": false, 00:14:09.813 "compare_and_write": false, 00:14:09.813 "abort": true, 00:14:09.813 "seek_hole": false, 00:14:09.813 "seek_data": false, 00:14:09.813 "copy": true, 00:14:09.813 "nvme_iov_md": false 00:14:09.813 }, 00:14:09.813 "memory_domains": [ 00:14:09.813 { 00:14:09.813 "dma_device_id": "system", 00:14:09.813 "dma_device_type": 1 00:14:09.813 }, 
00:14:09.813 { 00:14:09.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.813 "dma_device_type": 2 00:14:09.813 } 00:14:09.813 ], 00:14:09.813 "driver_specific": {} 00:14:09.813 } 00:14:09.813 ] 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.813 BaseBdev3 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.813 [ 00:14:09.813 { 00:14:09.813 "name": "BaseBdev3", 00:14:09.813 "aliases": [ 00:14:09.813 "274e414f-8147-457d-a433-7abc2119617d" 00:14:09.813 ], 00:14:09.813 "product_name": "Malloc disk", 00:14:09.813 "block_size": 512, 00:14:09.813 "num_blocks": 65536, 00:14:09.813 "uuid": "274e414f-8147-457d-a433-7abc2119617d", 00:14:09.813 "assigned_rate_limits": { 00:14:09.813 "rw_ios_per_sec": 0, 00:14:09.813 "rw_mbytes_per_sec": 0, 00:14:09.813 "r_mbytes_per_sec": 0, 00:14:09.813 "w_mbytes_per_sec": 0 00:14:09.813 }, 00:14:09.813 "claimed": false, 00:14:09.813 "zoned": false, 00:14:09.813 "supported_io_types": { 00:14:09.813 "read": true, 00:14:09.813 "write": true, 00:14:09.813 "unmap": true, 00:14:09.813 "flush": true, 00:14:09.813 "reset": true, 00:14:09.813 "nvme_admin": false, 00:14:09.813 "nvme_io": false, 00:14:09.813 "nvme_io_md": false, 00:14:09.813 "write_zeroes": true, 00:14:09.813 "zcopy": true, 00:14:09.813 "get_zone_info": false, 00:14:09.813 "zone_management": false, 00:14:09.813 "zone_append": false, 00:14:09.813 "compare": false, 00:14:09.813 "compare_and_write": false, 00:14:09.813 "abort": true, 00:14:09.813 "seek_hole": false, 00:14:09.813 "seek_data": false, 00:14:09.813 "copy": true, 00:14:09.813 "nvme_iov_md": false 00:14:09.813 }, 00:14:09.813 "memory_domains": [ 00:14:09.813 { 00:14:09.813 "dma_device_id": "system", 00:14:09.813 "dma_device_type": 1 00:14:09.813 }, 00:14:09.813 { 
00:14:09.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.813 "dma_device_type": 2 00:14:09.813 } 00:14:09.813 ], 00:14:09.813 "driver_specific": {} 00:14:09.813 } 00:14:09.813 ] 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.813 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.071 [2024-10-15 09:13:53.746151] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:10.071 [2024-10-15 09:13:53.746224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:10.071 [2024-10-15 09:13:53.746266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.071 [2024-10-15 09:13:53.748871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.071 "name": "Existed_Raid", 00:14:10.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.071 "strip_size_kb": 64, 00:14:10.071 "state": "configuring", 00:14:10.071 "raid_level": "raid0", 00:14:10.071 "superblock": false, 00:14:10.071 "num_base_bdevs": 3, 00:14:10.071 "num_base_bdevs_discovered": 2, 00:14:10.071 "num_base_bdevs_operational": 3, 00:14:10.071 "base_bdevs_list": [ 00:14:10.071 { 00:14:10.071 "name": "BaseBdev1", 00:14:10.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.071 
"is_configured": false, 00:14:10.071 "data_offset": 0, 00:14:10.071 "data_size": 0 00:14:10.071 }, 00:14:10.071 { 00:14:10.071 "name": "BaseBdev2", 00:14:10.071 "uuid": "9bacbfb0-8b28-4e91-b2b8-f5566e743ba3", 00:14:10.071 "is_configured": true, 00:14:10.071 "data_offset": 0, 00:14:10.071 "data_size": 65536 00:14:10.071 }, 00:14:10.071 { 00:14:10.071 "name": "BaseBdev3", 00:14:10.071 "uuid": "274e414f-8147-457d-a433-7abc2119617d", 00:14:10.071 "is_configured": true, 00:14:10.071 "data_offset": 0, 00:14:10.071 "data_size": 65536 00:14:10.071 } 00:14:10.071 ] 00:14:10.071 }' 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.071 09:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.637 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:10.637 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.637 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.637 [2024-10-15 09:13:54.266194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:10.637 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.638 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:10.638 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.638 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.638 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:10.638 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.638 09:13:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.638 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.638 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.638 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.638 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.638 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.638 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.638 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.638 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.638 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.638 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.638 "name": "Existed_Raid", 00:14:10.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.638 "strip_size_kb": 64, 00:14:10.638 "state": "configuring", 00:14:10.638 "raid_level": "raid0", 00:14:10.638 "superblock": false, 00:14:10.638 "num_base_bdevs": 3, 00:14:10.638 "num_base_bdevs_discovered": 1, 00:14:10.638 "num_base_bdevs_operational": 3, 00:14:10.638 "base_bdevs_list": [ 00:14:10.638 { 00:14:10.638 "name": "BaseBdev1", 00:14:10.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.638 "is_configured": false, 00:14:10.638 "data_offset": 0, 00:14:10.638 "data_size": 0 00:14:10.638 }, 00:14:10.638 { 00:14:10.638 "name": null, 00:14:10.638 "uuid": "9bacbfb0-8b28-4e91-b2b8-f5566e743ba3", 00:14:10.638 "is_configured": false, 00:14:10.638 "data_offset": 0, 
00:14:10.638 "data_size": 65536 00:14:10.638 }, 00:14:10.638 { 00:14:10.638 "name": "BaseBdev3", 00:14:10.638 "uuid": "274e414f-8147-457d-a433-7abc2119617d", 00:14:10.638 "is_configured": true, 00:14:10.638 "data_offset": 0, 00:14:10.638 "data_size": 65536 00:14:10.638 } 00:14:10.638 ] 00:14:10.638 }' 00:14:10.638 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.638 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.896 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:10.896 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.896 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.896 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.896 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.896 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:10.896 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:10.896 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.896 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.155 [2024-10-15 09:13:54.843682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.155 BaseBdev1 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.155 [ 00:14:11.155 { 00:14:11.155 "name": "BaseBdev1", 00:14:11.155 "aliases": [ 00:14:11.155 "27ac6acc-05de-4ce6-9cbe-51cc2378df17" 00:14:11.155 ], 00:14:11.155 "product_name": "Malloc disk", 00:14:11.155 "block_size": 512, 00:14:11.155 "num_blocks": 65536, 00:14:11.155 "uuid": "27ac6acc-05de-4ce6-9cbe-51cc2378df17", 00:14:11.155 "assigned_rate_limits": { 00:14:11.155 "rw_ios_per_sec": 0, 00:14:11.155 "rw_mbytes_per_sec": 0, 00:14:11.155 "r_mbytes_per_sec": 0, 00:14:11.155 "w_mbytes_per_sec": 0 00:14:11.155 }, 00:14:11.155 "claimed": true, 00:14:11.155 "claim_type": "exclusive_write", 00:14:11.155 "zoned": false, 00:14:11.155 "supported_io_types": { 00:14:11.155 "read": true, 00:14:11.155 "write": true, 00:14:11.155 "unmap": 
true, 00:14:11.155 "flush": true, 00:14:11.155 "reset": true, 00:14:11.155 "nvme_admin": false, 00:14:11.155 "nvme_io": false, 00:14:11.155 "nvme_io_md": false, 00:14:11.155 "write_zeroes": true, 00:14:11.155 "zcopy": true, 00:14:11.155 "get_zone_info": false, 00:14:11.155 "zone_management": false, 00:14:11.155 "zone_append": false, 00:14:11.155 "compare": false, 00:14:11.155 "compare_and_write": false, 00:14:11.155 "abort": true, 00:14:11.155 "seek_hole": false, 00:14:11.155 "seek_data": false, 00:14:11.155 "copy": true, 00:14:11.155 "nvme_iov_md": false 00:14:11.155 }, 00:14:11.155 "memory_domains": [ 00:14:11.155 { 00:14:11.155 "dma_device_id": "system", 00:14:11.155 "dma_device_type": 1 00:14:11.155 }, 00:14:11.155 { 00:14:11.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.155 "dma_device_type": 2 00:14:11.155 } 00:14:11.155 ], 00:14:11.155 "driver_specific": {} 00:14:11.155 } 00:14:11.155 ] 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.155 09:13:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.155 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.155 "name": "Existed_Raid", 00:14:11.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.155 "strip_size_kb": 64, 00:14:11.155 "state": "configuring", 00:14:11.155 "raid_level": "raid0", 00:14:11.155 "superblock": false, 00:14:11.155 "num_base_bdevs": 3, 00:14:11.155 "num_base_bdevs_discovered": 2, 00:14:11.155 "num_base_bdevs_operational": 3, 00:14:11.155 "base_bdevs_list": [ 00:14:11.155 { 00:14:11.155 "name": "BaseBdev1", 00:14:11.155 "uuid": "27ac6acc-05de-4ce6-9cbe-51cc2378df17", 00:14:11.155 "is_configured": true, 00:14:11.155 "data_offset": 0, 00:14:11.155 "data_size": 65536 00:14:11.155 }, 00:14:11.155 { 00:14:11.155 "name": null, 00:14:11.156 "uuid": "9bacbfb0-8b28-4e91-b2b8-f5566e743ba3", 00:14:11.156 "is_configured": false, 00:14:11.156 "data_offset": 0, 00:14:11.156 "data_size": 65536 00:14:11.156 }, 00:14:11.156 { 00:14:11.156 "name": "BaseBdev3", 00:14:11.156 "uuid": "274e414f-8147-457d-a433-7abc2119617d", 00:14:11.156 "is_configured": true, 00:14:11.156 "data_offset": 0, 
00:14:11.156 "data_size": 65536 00:14:11.156 } 00:14:11.156 ] 00:14:11.156 }' 00:14:11.156 09:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.156 09:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.724 [2024-10-15 09:13:55.407908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.724 "name": "Existed_Raid", 00:14:11.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.724 "strip_size_kb": 64, 00:14:11.724 "state": "configuring", 00:14:11.724 "raid_level": "raid0", 00:14:11.724 "superblock": false, 00:14:11.724 "num_base_bdevs": 3, 00:14:11.724 "num_base_bdevs_discovered": 1, 00:14:11.724 "num_base_bdevs_operational": 3, 00:14:11.724 "base_bdevs_list": [ 00:14:11.724 { 00:14:11.724 "name": "BaseBdev1", 00:14:11.724 "uuid": "27ac6acc-05de-4ce6-9cbe-51cc2378df17", 00:14:11.724 "is_configured": true, 00:14:11.724 "data_offset": 0, 00:14:11.724 "data_size": 65536 00:14:11.724 }, 00:14:11.724 { 
00:14:11.724 "name": null, 00:14:11.724 "uuid": "9bacbfb0-8b28-4e91-b2b8-f5566e743ba3", 00:14:11.724 "is_configured": false, 00:14:11.724 "data_offset": 0, 00:14:11.724 "data_size": 65536 00:14:11.724 }, 00:14:11.724 { 00:14:11.724 "name": null, 00:14:11.724 "uuid": "274e414f-8147-457d-a433-7abc2119617d", 00:14:11.724 "is_configured": false, 00:14:11.724 "data_offset": 0, 00:14:11.724 "data_size": 65536 00:14:11.724 } 00:14:11.724 ] 00:14:11.724 }' 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.724 09:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.291 09:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:12.291 09:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.291 09:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.291 09:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.291 09:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.291 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:12.291 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:12.291 09:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.291 09:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.291 [2024-10-15 09:13:56.036151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:12.291 09:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.291 09:13:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:12.291 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.291 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.291 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:12.291 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.291 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.291 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.291 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.291 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.291 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.291 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.291 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.292 09:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.292 09:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.292 09:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.292 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.292 "name": "Existed_Raid", 00:14:12.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.292 "strip_size_kb": 64, 00:14:12.292 "state": "configuring", 00:14:12.292 "raid_level": "raid0", 00:14:12.292 
"superblock": false, 00:14:12.292 "num_base_bdevs": 3, 00:14:12.292 "num_base_bdevs_discovered": 2, 00:14:12.292 "num_base_bdevs_operational": 3, 00:14:12.292 "base_bdevs_list": [ 00:14:12.292 { 00:14:12.292 "name": "BaseBdev1", 00:14:12.292 "uuid": "27ac6acc-05de-4ce6-9cbe-51cc2378df17", 00:14:12.292 "is_configured": true, 00:14:12.292 "data_offset": 0, 00:14:12.292 "data_size": 65536 00:14:12.292 }, 00:14:12.292 { 00:14:12.292 "name": null, 00:14:12.292 "uuid": "9bacbfb0-8b28-4e91-b2b8-f5566e743ba3", 00:14:12.292 "is_configured": false, 00:14:12.292 "data_offset": 0, 00:14:12.292 "data_size": 65536 00:14:12.292 }, 00:14:12.292 { 00:14:12.292 "name": "BaseBdev3", 00:14:12.292 "uuid": "274e414f-8147-457d-a433-7abc2119617d", 00:14:12.292 "is_configured": true, 00:14:12.292 "data_offset": 0, 00:14:12.292 "data_size": 65536 00:14:12.292 } 00:14:12.292 ] 00:14:12.292 }' 00:14:12.292 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.292 09:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.859 [2024-10-15 09:13:56.592293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.859 "name": "Existed_Raid", 00:14:12.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.859 "strip_size_kb": 64, 00:14:12.859 "state": "configuring", 00:14:12.859 "raid_level": "raid0", 00:14:12.859 "superblock": false, 00:14:12.859 "num_base_bdevs": 3, 00:14:12.859 "num_base_bdevs_discovered": 1, 00:14:12.859 "num_base_bdevs_operational": 3, 00:14:12.859 "base_bdevs_list": [ 00:14:12.859 { 00:14:12.859 "name": null, 00:14:12.859 "uuid": "27ac6acc-05de-4ce6-9cbe-51cc2378df17", 00:14:12.859 "is_configured": false, 00:14:12.859 "data_offset": 0, 00:14:12.859 "data_size": 65536 00:14:12.859 }, 00:14:12.859 { 00:14:12.859 "name": null, 00:14:12.859 "uuid": "9bacbfb0-8b28-4e91-b2b8-f5566e743ba3", 00:14:12.859 "is_configured": false, 00:14:12.859 "data_offset": 0, 00:14:12.859 "data_size": 65536 00:14:12.859 }, 00:14:12.859 { 00:14:12.859 "name": "BaseBdev3", 00:14:12.859 "uuid": "274e414f-8147-457d-a433-7abc2119617d", 00:14:12.859 "is_configured": true, 00:14:12.859 "data_offset": 0, 00:14:12.859 "data_size": 65536 00:14:12.859 } 00:14:12.859 ] 00:14:12.859 }' 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.859 09:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.505 [2024-10-15 09:13:57.256959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.505 "name": "Existed_Raid", 00:14:13.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.505 "strip_size_kb": 64, 00:14:13.505 "state": "configuring", 00:14:13.505 "raid_level": "raid0", 00:14:13.505 "superblock": false, 00:14:13.505 "num_base_bdevs": 3, 00:14:13.505 "num_base_bdevs_discovered": 2, 00:14:13.505 "num_base_bdevs_operational": 3, 00:14:13.505 "base_bdevs_list": [ 00:14:13.505 { 00:14:13.505 "name": null, 00:14:13.505 "uuid": "27ac6acc-05de-4ce6-9cbe-51cc2378df17", 00:14:13.505 "is_configured": false, 00:14:13.505 "data_offset": 0, 00:14:13.505 "data_size": 65536 00:14:13.505 }, 00:14:13.505 { 00:14:13.505 "name": "BaseBdev2", 00:14:13.505 "uuid": "9bacbfb0-8b28-4e91-b2b8-f5566e743ba3", 00:14:13.505 "is_configured": true, 00:14:13.505 "data_offset": 0, 00:14:13.505 "data_size": 65536 00:14:13.505 }, 00:14:13.505 { 00:14:13.505 "name": "BaseBdev3", 00:14:13.505 "uuid": "274e414f-8147-457d-a433-7abc2119617d", 00:14:13.505 "is_configured": true, 00:14:13.505 "data_offset": 0, 00:14:13.505 "data_size": 65536 00:14:13.505 } 00:14:13.505 ] 00:14:13.505 }' 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.505 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.072 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.072 09:13:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.072 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.072 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:14.072 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.072 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:14.072 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.072 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:14.072 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.072 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.072 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.072 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 27ac6acc-05de-4ce6-9cbe-51cc2378df17 00:14:14.072 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.072 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.072 [2024-10-15 09:13:57.898328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:14.072 [2024-10-15 09:13:57.898410] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:14.072 [2024-10-15 09:13:57.898426] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:14.072 [2024-10-15 09:13:57.898765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:14:14.072 [2024-10-15 09:13:57.898977] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:14.073 [2024-10-15 09:13:57.899004] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:14.073 [2024-10-15 09:13:57.899353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.073 NewBaseBdev 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:14:14.073 [ 00:14:14.073 { 00:14:14.073 "name": "NewBaseBdev", 00:14:14.073 "aliases": [ 00:14:14.073 "27ac6acc-05de-4ce6-9cbe-51cc2378df17" 00:14:14.073 ], 00:14:14.073 "product_name": "Malloc disk", 00:14:14.073 "block_size": 512, 00:14:14.073 "num_blocks": 65536, 00:14:14.073 "uuid": "27ac6acc-05de-4ce6-9cbe-51cc2378df17", 00:14:14.073 "assigned_rate_limits": { 00:14:14.073 "rw_ios_per_sec": 0, 00:14:14.073 "rw_mbytes_per_sec": 0, 00:14:14.073 "r_mbytes_per_sec": 0, 00:14:14.073 "w_mbytes_per_sec": 0 00:14:14.073 }, 00:14:14.073 "claimed": true, 00:14:14.073 "claim_type": "exclusive_write", 00:14:14.073 "zoned": false, 00:14:14.073 "supported_io_types": { 00:14:14.073 "read": true, 00:14:14.073 "write": true, 00:14:14.073 "unmap": true, 00:14:14.073 "flush": true, 00:14:14.073 "reset": true, 00:14:14.073 "nvme_admin": false, 00:14:14.073 "nvme_io": false, 00:14:14.073 "nvme_io_md": false, 00:14:14.073 "write_zeroes": true, 00:14:14.073 "zcopy": true, 00:14:14.073 "get_zone_info": false, 00:14:14.073 "zone_management": false, 00:14:14.073 "zone_append": false, 00:14:14.073 "compare": false, 00:14:14.073 "compare_and_write": false, 00:14:14.073 "abort": true, 00:14:14.073 "seek_hole": false, 00:14:14.073 "seek_data": false, 00:14:14.073 "copy": true, 00:14:14.073 "nvme_iov_md": false 00:14:14.073 }, 00:14:14.073 "memory_domains": [ 00:14:14.073 { 00:14:14.073 "dma_device_id": "system", 00:14:14.073 "dma_device_type": 1 00:14:14.073 }, 00:14:14.073 { 00:14:14.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.073 "dma_device_type": 2 00:14:14.073 } 00:14:14.073 ], 00:14:14.073 "driver_specific": {} 00:14:14.073 } 00:14:14.073 ] 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.073 "name": "Existed_Raid", 00:14:14.073 "uuid": "925f7e92-feab-42c3-bc03-8019d5253cf0", 00:14:14.073 "strip_size_kb": 64, 00:14:14.073 "state": "online", 00:14:14.073 "raid_level": "raid0", 00:14:14.073 "superblock": false, 00:14:14.073 "num_base_bdevs": 3, 00:14:14.073 
"num_base_bdevs_discovered": 3, 00:14:14.073 "num_base_bdevs_operational": 3, 00:14:14.073 "base_bdevs_list": [ 00:14:14.073 { 00:14:14.073 "name": "NewBaseBdev", 00:14:14.073 "uuid": "27ac6acc-05de-4ce6-9cbe-51cc2378df17", 00:14:14.073 "is_configured": true, 00:14:14.073 "data_offset": 0, 00:14:14.073 "data_size": 65536 00:14:14.073 }, 00:14:14.073 { 00:14:14.073 "name": "BaseBdev2", 00:14:14.073 "uuid": "9bacbfb0-8b28-4e91-b2b8-f5566e743ba3", 00:14:14.073 "is_configured": true, 00:14:14.073 "data_offset": 0, 00:14:14.073 "data_size": 65536 00:14:14.073 }, 00:14:14.073 { 00:14:14.073 "name": "BaseBdev3", 00:14:14.073 "uuid": "274e414f-8147-457d-a433-7abc2119617d", 00:14:14.073 "is_configured": true, 00:14:14.073 "data_offset": 0, 00:14:14.073 "data_size": 65536 00:14:14.073 } 00:14:14.073 ] 00:14:14.073 }' 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.073 09:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.640 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:14.640 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:14.640 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:14.640 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:14.640 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:14.640 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:14.640 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:14.641 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.641 09:13:58 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:14:14.641 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:14.641 [2024-10-15 09:13:58.446927] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:14.641 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.641 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:14.641 "name": "Existed_Raid", 00:14:14.641 "aliases": [ 00:14:14.641 "925f7e92-feab-42c3-bc03-8019d5253cf0" 00:14:14.641 ], 00:14:14.641 "product_name": "Raid Volume", 00:14:14.641 "block_size": 512, 00:14:14.641 "num_blocks": 196608, 00:14:14.641 "uuid": "925f7e92-feab-42c3-bc03-8019d5253cf0", 00:14:14.641 "assigned_rate_limits": { 00:14:14.641 "rw_ios_per_sec": 0, 00:14:14.641 "rw_mbytes_per_sec": 0, 00:14:14.641 "r_mbytes_per_sec": 0, 00:14:14.641 "w_mbytes_per_sec": 0 00:14:14.641 }, 00:14:14.641 "claimed": false, 00:14:14.641 "zoned": false, 00:14:14.641 "supported_io_types": { 00:14:14.641 "read": true, 00:14:14.641 "write": true, 00:14:14.641 "unmap": true, 00:14:14.641 "flush": true, 00:14:14.641 "reset": true, 00:14:14.641 "nvme_admin": false, 00:14:14.641 "nvme_io": false, 00:14:14.641 "nvme_io_md": false, 00:14:14.641 "write_zeroes": true, 00:14:14.641 "zcopy": false, 00:14:14.641 "get_zone_info": false, 00:14:14.641 "zone_management": false, 00:14:14.641 "zone_append": false, 00:14:14.641 "compare": false, 00:14:14.641 "compare_and_write": false, 00:14:14.641 "abort": false, 00:14:14.641 "seek_hole": false, 00:14:14.641 "seek_data": false, 00:14:14.641 "copy": false, 00:14:14.641 "nvme_iov_md": false 00:14:14.641 }, 00:14:14.641 "memory_domains": [ 00:14:14.641 { 00:14:14.641 "dma_device_id": "system", 00:14:14.641 "dma_device_type": 1 00:14:14.641 }, 00:14:14.641 { 00:14:14.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.641 "dma_device_type": 2 00:14:14.641 }, 00:14:14.641 
{ 00:14:14.641 "dma_device_id": "system", 00:14:14.641 "dma_device_type": 1 00:14:14.641 }, 00:14:14.641 { 00:14:14.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.641 "dma_device_type": 2 00:14:14.641 }, 00:14:14.641 { 00:14:14.641 "dma_device_id": "system", 00:14:14.641 "dma_device_type": 1 00:14:14.641 }, 00:14:14.641 { 00:14:14.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.641 "dma_device_type": 2 00:14:14.641 } 00:14:14.641 ], 00:14:14.641 "driver_specific": { 00:14:14.641 "raid": { 00:14:14.641 "uuid": "925f7e92-feab-42c3-bc03-8019d5253cf0", 00:14:14.641 "strip_size_kb": 64, 00:14:14.641 "state": "online", 00:14:14.641 "raid_level": "raid0", 00:14:14.641 "superblock": false, 00:14:14.641 "num_base_bdevs": 3, 00:14:14.641 "num_base_bdevs_discovered": 3, 00:14:14.641 "num_base_bdevs_operational": 3, 00:14:14.641 "base_bdevs_list": [ 00:14:14.641 { 00:14:14.641 "name": "NewBaseBdev", 00:14:14.641 "uuid": "27ac6acc-05de-4ce6-9cbe-51cc2378df17", 00:14:14.641 "is_configured": true, 00:14:14.641 "data_offset": 0, 00:14:14.641 "data_size": 65536 00:14:14.641 }, 00:14:14.641 { 00:14:14.641 "name": "BaseBdev2", 00:14:14.641 "uuid": "9bacbfb0-8b28-4e91-b2b8-f5566e743ba3", 00:14:14.641 "is_configured": true, 00:14:14.641 "data_offset": 0, 00:14:14.641 "data_size": 65536 00:14:14.641 }, 00:14:14.641 { 00:14:14.641 "name": "BaseBdev3", 00:14:14.641 "uuid": "274e414f-8147-457d-a433-7abc2119617d", 00:14:14.641 "is_configured": true, 00:14:14.641 "data_offset": 0, 00:14:14.641 "data_size": 65536 00:14:14.641 } 00:14:14.641 ] 00:14:14.641 } 00:14:14.641 } 00:14:14.641 }' 00:14:14.641 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:14.641 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:14.641 BaseBdev2 00:14:14.641 BaseBdev3' 00:14:14.641 09:13:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.900 
09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.900 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.901 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:14.901 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.901 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.901 [2024-10-15 09:13:58.770616] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:14.901 [2024-10-15 09:13:58.770655] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:14.901 [2024-10-15 09:13:58.770778] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:14.901 [2024-10-15 09:13:58.770858] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:14.901 [2024-10-15 09:13:58.770879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:14.901 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.901 09:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63999 00:14:14.901 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 63999 ']' 00:14:14.901 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 63999 00:14:14.901 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:14.901 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:14.901 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63999 00:14:14.901 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:14.901 killing process with pid 63999 00:14:14.901 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:14.901 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63999' 00:14:14.901 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 63999 00:14:14.901 [2024-10-15 09:13:58.811091] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:14.901 09:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 63999 00:14:15.469 [2024-10-15 09:13:59.103856] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:16.405 09:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:16.405 00:14:16.405 real 0m12.011s 00:14:16.405 user 0m19.707s 00:14:16.405 sys 0m1.758s 00:14:16.405 09:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:16.405 
************************************ 00:14:16.405 END TEST raid_state_function_test 00:14:16.405 ************************************ 00:14:16.405 09:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.405 09:14:00 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:14:16.405 09:14:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:16.405 09:14:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:16.405 09:14:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:16.405 ************************************ 00:14:16.405 START TEST raid_state_function_test_sb 00:14:16.405 ************************************ 00:14:16.405 09:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:14:16.405 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:14:16.405 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:16.405 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:16.405 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:16.405 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:16.405 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:16.405 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:16.405 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:16.405 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:16.405 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:16.406 Process raid pid: 64635 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64635 00:14:16.406 09:14:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64635' 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64635 00:14:16.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 64635 ']' 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:16.406 09:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.663 [2024-10-15 09:14:00.375529] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:14:16.663 [2024-10-15 09:14:00.375706] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.663 [2024-10-15 09:14:00.546217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.921 [2024-10-15 09:14:00.723774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.180 [2024-10-15 09:14:00.959540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.180 [2024-10-15 09:14:00.959614] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.747 09:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:17.747 09:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:17.747 09:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:17.747 09:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.747 09:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.747 [2024-10-15 09:14:01.447567] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:17.747 [2024-10-15 09:14:01.447650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:17.748 [2024-10-15 09:14:01.447669] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:17.748 [2024-10-15 09:14:01.447687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:17.748 [2024-10-15 09:14:01.447697] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:14:17.748 [2024-10-15 09:14:01.447714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:17.748 09:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.748 09:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:17.748 09:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.748 09:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.748 09:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:17.748 09:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.748 09:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:17.748 09:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.748 09:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.748 09:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.748 09:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.748 09:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.748 09:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.748 09:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.748 09:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.748 09:14:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.748 09:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.748 "name": "Existed_Raid", 00:14:17.748 "uuid": "de951429-66f8-4581-b31c-1f912a72984d", 00:14:17.748 "strip_size_kb": 64, 00:14:17.748 "state": "configuring", 00:14:17.748 "raid_level": "raid0", 00:14:17.748 "superblock": true, 00:14:17.748 "num_base_bdevs": 3, 00:14:17.748 "num_base_bdevs_discovered": 0, 00:14:17.748 "num_base_bdevs_operational": 3, 00:14:17.748 "base_bdevs_list": [ 00:14:17.748 { 00:14:17.748 "name": "BaseBdev1", 00:14:17.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.748 "is_configured": false, 00:14:17.748 "data_offset": 0, 00:14:17.748 "data_size": 0 00:14:17.748 }, 00:14:17.748 { 00:14:17.748 "name": "BaseBdev2", 00:14:17.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.748 "is_configured": false, 00:14:17.748 "data_offset": 0, 00:14:17.748 "data_size": 0 00:14:17.748 }, 00:14:17.748 { 00:14:17.748 "name": "BaseBdev3", 00:14:17.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.748 "is_configured": false, 00:14:17.748 "data_offset": 0, 00:14:17.748 "data_size": 0 00:14:17.748 } 00:14:17.748 ] 00:14:17.748 }' 00:14:17.748 09:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.748 09:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.315 09:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:18.315 09:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.315 09:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.315 [2024-10-15 09:14:01.951591] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:18.315 [2024-10-15 09:14:01.951810] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:18.315 09:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.315 09:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:18.315 09:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.315 09:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.315 [2024-10-15 09:14:01.959611] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:18.315 [2024-10-15 09:14:01.959791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:18.315 [2024-10-15 09:14:01.959916] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:18.315 [2024-10-15 09:14:01.959978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:18.315 [2024-10-15 09:14:01.960224] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:18.315 [2024-10-15 09:14:01.960356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:18.315 09:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.315 09:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:18.315 09:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.315 09:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.315 [2024-10-15 09:14:02.008024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:18.315 BaseBdev1 
00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.315 [ 00:14:18.315 { 00:14:18.315 "name": "BaseBdev1", 00:14:18.315 "aliases": [ 00:14:18.315 "483680a4-ac1d-4b6a-865d-5c221ef12214" 00:14:18.315 ], 00:14:18.315 "product_name": "Malloc disk", 00:14:18.315 "block_size": 512, 00:14:18.315 "num_blocks": 65536, 00:14:18.315 "uuid": "483680a4-ac1d-4b6a-865d-5c221ef12214", 00:14:18.315 "assigned_rate_limits": { 00:14:18.315 
"rw_ios_per_sec": 0, 00:14:18.315 "rw_mbytes_per_sec": 0, 00:14:18.315 "r_mbytes_per_sec": 0, 00:14:18.315 "w_mbytes_per_sec": 0 00:14:18.315 }, 00:14:18.315 "claimed": true, 00:14:18.315 "claim_type": "exclusive_write", 00:14:18.315 "zoned": false, 00:14:18.315 "supported_io_types": { 00:14:18.315 "read": true, 00:14:18.315 "write": true, 00:14:18.315 "unmap": true, 00:14:18.315 "flush": true, 00:14:18.315 "reset": true, 00:14:18.315 "nvme_admin": false, 00:14:18.315 "nvme_io": false, 00:14:18.315 "nvme_io_md": false, 00:14:18.315 "write_zeroes": true, 00:14:18.315 "zcopy": true, 00:14:18.315 "get_zone_info": false, 00:14:18.315 "zone_management": false, 00:14:18.315 "zone_append": false, 00:14:18.315 "compare": false, 00:14:18.315 "compare_and_write": false, 00:14:18.315 "abort": true, 00:14:18.315 "seek_hole": false, 00:14:18.315 "seek_data": false, 00:14:18.315 "copy": true, 00:14:18.315 "nvme_iov_md": false 00:14:18.315 }, 00:14:18.315 "memory_domains": [ 00:14:18.315 { 00:14:18.315 "dma_device_id": "system", 00:14:18.315 "dma_device_type": 1 00:14:18.315 }, 00:14:18.315 { 00:14:18.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.315 "dma_device_type": 2 00:14:18.315 } 00:14:18.315 ], 00:14:18.315 "driver_specific": {} 00:14:18.315 } 00:14:18.315 ] 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.315 "name": "Existed_Raid", 00:14:18.315 "uuid": "f4537c7d-5291-4b1e-9ed2-a12bc278f909", 00:14:18.315 "strip_size_kb": 64, 00:14:18.315 "state": "configuring", 00:14:18.315 "raid_level": "raid0", 00:14:18.315 "superblock": true, 00:14:18.315 "num_base_bdevs": 3, 00:14:18.315 "num_base_bdevs_discovered": 1, 00:14:18.315 "num_base_bdevs_operational": 3, 00:14:18.315 "base_bdevs_list": [ 00:14:18.315 { 00:14:18.315 "name": "BaseBdev1", 00:14:18.315 "uuid": "483680a4-ac1d-4b6a-865d-5c221ef12214", 00:14:18.315 "is_configured": true, 00:14:18.315 "data_offset": 2048, 00:14:18.315 "data_size": 63488 
00:14:18.315 }, 00:14:18.315 { 00:14:18.315 "name": "BaseBdev2", 00:14:18.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.315 "is_configured": false, 00:14:18.315 "data_offset": 0, 00:14:18.315 "data_size": 0 00:14:18.315 }, 00:14:18.315 { 00:14:18.315 "name": "BaseBdev3", 00:14:18.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.315 "is_configured": false, 00:14:18.315 "data_offset": 0, 00:14:18.315 "data_size": 0 00:14:18.315 } 00:14:18.315 ] 00:14:18.315 }' 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.315 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.883 [2024-10-15 09:14:02.588347] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:18.883 [2024-10-15 09:14:02.588458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.883 [2024-10-15 09:14:02.600412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:18.883 [2024-10-15 
09:14:02.603359] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:18.883 [2024-10-15 09:14:02.603564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:18.883 [2024-10-15 09:14:02.603693] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:18.883 [2024-10-15 09:14:02.603755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.883 "name": "Existed_Raid", 00:14:18.883 "uuid": "8be34a15-f8c2-4f46-8c85-1254854d8c64", 00:14:18.883 "strip_size_kb": 64, 00:14:18.883 "state": "configuring", 00:14:18.883 "raid_level": "raid0", 00:14:18.883 "superblock": true, 00:14:18.883 "num_base_bdevs": 3, 00:14:18.883 "num_base_bdevs_discovered": 1, 00:14:18.883 "num_base_bdevs_operational": 3, 00:14:18.883 "base_bdevs_list": [ 00:14:18.883 { 00:14:18.883 "name": "BaseBdev1", 00:14:18.883 "uuid": "483680a4-ac1d-4b6a-865d-5c221ef12214", 00:14:18.883 "is_configured": true, 00:14:18.883 "data_offset": 2048, 00:14:18.883 "data_size": 63488 00:14:18.883 }, 00:14:18.883 { 00:14:18.883 "name": "BaseBdev2", 00:14:18.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.883 "is_configured": false, 00:14:18.883 "data_offset": 0, 00:14:18.883 "data_size": 0 00:14:18.883 }, 00:14:18.883 { 00:14:18.883 "name": "BaseBdev3", 00:14:18.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.883 "is_configured": false, 00:14:18.883 "data_offset": 0, 00:14:18.883 "data_size": 0 00:14:18.883 } 00:14:18.883 ] 00:14:18.883 }' 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.883 09:14:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.451 [2024-10-15 09:14:03.182519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.451 BaseBdev2 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.451 [ 00:14:19.451 { 00:14:19.451 "name": "BaseBdev2", 00:14:19.451 "aliases": [ 00:14:19.451 "afea8da3-c47b-4bcb-9219-a0081ee26677" 00:14:19.451 ], 00:14:19.451 "product_name": "Malloc disk", 00:14:19.451 "block_size": 512, 00:14:19.451 "num_blocks": 65536, 00:14:19.451 "uuid": "afea8da3-c47b-4bcb-9219-a0081ee26677", 00:14:19.451 "assigned_rate_limits": { 00:14:19.451 "rw_ios_per_sec": 0, 00:14:19.451 "rw_mbytes_per_sec": 0, 00:14:19.451 "r_mbytes_per_sec": 0, 00:14:19.451 "w_mbytes_per_sec": 0 00:14:19.451 }, 00:14:19.451 "claimed": true, 00:14:19.451 "claim_type": "exclusive_write", 00:14:19.451 "zoned": false, 00:14:19.451 "supported_io_types": { 00:14:19.451 "read": true, 00:14:19.451 "write": true, 00:14:19.451 "unmap": true, 00:14:19.451 "flush": true, 00:14:19.451 "reset": true, 00:14:19.451 "nvme_admin": false, 00:14:19.451 "nvme_io": false, 00:14:19.451 "nvme_io_md": false, 00:14:19.451 "write_zeroes": true, 00:14:19.451 "zcopy": true, 00:14:19.451 "get_zone_info": false, 00:14:19.451 "zone_management": false, 00:14:19.451 "zone_append": false, 00:14:19.451 "compare": false, 00:14:19.451 "compare_and_write": false, 00:14:19.451 "abort": true, 00:14:19.451 "seek_hole": false, 00:14:19.451 "seek_data": false, 00:14:19.451 "copy": true, 00:14:19.451 "nvme_iov_md": false 00:14:19.451 }, 00:14:19.451 "memory_domains": [ 00:14:19.451 { 00:14:19.451 "dma_device_id": "system", 00:14:19.451 "dma_device_type": 1 00:14:19.451 }, 00:14:19.451 { 00:14:19.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.451 "dma_device_type": 2 00:14:19.451 } 00:14:19.451 ], 00:14:19.451 "driver_specific": {} 00:14:19.451 } 00:14:19.451 ] 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.451 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.451 "name": "Existed_Raid", 00:14:19.451 "uuid": "8be34a15-f8c2-4f46-8c85-1254854d8c64", 00:14:19.451 "strip_size_kb": 64, 00:14:19.451 "state": "configuring", 00:14:19.451 "raid_level": "raid0", 00:14:19.451 "superblock": true, 00:14:19.451 "num_base_bdevs": 3, 00:14:19.451 "num_base_bdevs_discovered": 2, 00:14:19.451 "num_base_bdevs_operational": 3, 00:14:19.451 "base_bdevs_list": [ 00:14:19.451 { 00:14:19.451 "name": "BaseBdev1", 00:14:19.451 "uuid": "483680a4-ac1d-4b6a-865d-5c221ef12214", 00:14:19.451 "is_configured": true, 00:14:19.451 "data_offset": 2048, 00:14:19.451 "data_size": 63488 00:14:19.451 }, 00:14:19.451 { 00:14:19.451 "name": "BaseBdev2", 00:14:19.451 "uuid": "afea8da3-c47b-4bcb-9219-a0081ee26677", 00:14:19.451 "is_configured": true, 00:14:19.451 "data_offset": 2048, 00:14:19.451 "data_size": 63488 00:14:19.451 }, 00:14:19.451 { 00:14:19.451 "name": "BaseBdev3", 00:14:19.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.451 "is_configured": false, 00:14:19.451 "data_offset": 0, 00:14:19.452 "data_size": 0 00:14:19.452 } 00:14:19.452 ] 00:14:19.452 }' 00:14:19.452 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.452 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.019 BaseBdev3 00:14:20.019 [2024-10-15 09:14:03.769253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:20.019 [2024-10-15 
09:14:03.769664] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:20.019 [2024-10-15 09:14:03.769698] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:20.019 [2024-10-15 09:14:03.770074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:20.019 [2024-10-15 09:14:03.770338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:20.019 [2024-10-15 09:14:03.770357] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:20.019 [2024-10-15 09:14:03.770553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.019 [ 00:14:20.019 { 00:14:20.019 "name": "BaseBdev3", 00:14:20.019 "aliases": [ 00:14:20.019 "ef52e152-4862-4c60-9ef8-7ec2e0ba0286" 00:14:20.019 ], 00:14:20.019 "product_name": "Malloc disk", 00:14:20.019 "block_size": 512, 00:14:20.019 "num_blocks": 65536, 00:14:20.019 "uuid": "ef52e152-4862-4c60-9ef8-7ec2e0ba0286", 00:14:20.019 "assigned_rate_limits": { 00:14:20.019 "rw_ios_per_sec": 0, 00:14:20.019 "rw_mbytes_per_sec": 0, 00:14:20.019 "r_mbytes_per_sec": 0, 00:14:20.019 "w_mbytes_per_sec": 0 00:14:20.019 }, 00:14:20.019 "claimed": true, 00:14:20.019 "claim_type": "exclusive_write", 00:14:20.019 "zoned": false, 00:14:20.019 "supported_io_types": { 00:14:20.019 "read": true, 00:14:20.019 "write": true, 00:14:20.019 "unmap": true, 00:14:20.019 "flush": true, 00:14:20.019 "reset": true, 00:14:20.019 "nvme_admin": false, 00:14:20.019 "nvme_io": false, 00:14:20.019 "nvme_io_md": false, 00:14:20.019 "write_zeroes": true, 00:14:20.019 "zcopy": true, 00:14:20.019 "get_zone_info": false, 00:14:20.019 "zone_management": false, 00:14:20.019 "zone_append": false, 00:14:20.019 "compare": false, 00:14:20.019 "compare_and_write": false, 00:14:20.019 "abort": true, 00:14:20.019 "seek_hole": false, 00:14:20.019 "seek_data": false, 00:14:20.019 "copy": true, 00:14:20.019 "nvme_iov_md": false 00:14:20.019 }, 00:14:20.019 "memory_domains": [ 00:14:20.019 { 00:14:20.019 "dma_device_id": "system", 00:14:20.019 "dma_device_type": 1 00:14:20.019 }, 00:14:20.019 { 00:14:20.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.019 "dma_device_type": 2 00:14:20.019 } 00:14:20.019 ], 00:14:20.019 "driver_specific": {} 
00:14:20.019 } 00:14:20.019 ] 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.019 
09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.019 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.019 "name": "Existed_Raid", 00:14:20.019 "uuid": "8be34a15-f8c2-4f46-8c85-1254854d8c64", 00:14:20.019 "strip_size_kb": 64, 00:14:20.019 "state": "online", 00:14:20.019 "raid_level": "raid0", 00:14:20.019 "superblock": true, 00:14:20.019 "num_base_bdevs": 3, 00:14:20.020 "num_base_bdevs_discovered": 3, 00:14:20.020 "num_base_bdevs_operational": 3, 00:14:20.020 "base_bdevs_list": [ 00:14:20.020 { 00:14:20.020 "name": "BaseBdev1", 00:14:20.020 "uuid": "483680a4-ac1d-4b6a-865d-5c221ef12214", 00:14:20.020 "is_configured": true, 00:14:20.020 "data_offset": 2048, 00:14:20.020 "data_size": 63488 00:14:20.020 }, 00:14:20.020 { 00:14:20.020 "name": "BaseBdev2", 00:14:20.020 "uuid": "afea8da3-c47b-4bcb-9219-a0081ee26677", 00:14:20.020 "is_configured": true, 00:14:20.020 "data_offset": 2048, 00:14:20.020 "data_size": 63488 00:14:20.020 }, 00:14:20.020 { 00:14:20.020 "name": "BaseBdev3", 00:14:20.020 "uuid": "ef52e152-4862-4c60-9ef8-7ec2e0ba0286", 00:14:20.020 "is_configured": true, 00:14:20.020 "data_offset": 2048, 00:14:20.020 "data_size": 63488 00:14:20.020 } 00:14:20.020 ] 00:14:20.020 }' 00:14:20.020 09:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.020 09:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:20.590 [2024-10-15 09:14:04.305865] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:20.590 "name": "Existed_Raid", 00:14:20.590 "aliases": [ 00:14:20.590 "8be34a15-f8c2-4f46-8c85-1254854d8c64" 00:14:20.590 ], 00:14:20.590 "product_name": "Raid Volume", 00:14:20.590 "block_size": 512, 00:14:20.590 "num_blocks": 190464, 00:14:20.590 "uuid": "8be34a15-f8c2-4f46-8c85-1254854d8c64", 00:14:20.590 "assigned_rate_limits": { 00:14:20.590 "rw_ios_per_sec": 0, 00:14:20.590 "rw_mbytes_per_sec": 0, 00:14:20.590 "r_mbytes_per_sec": 0, 00:14:20.590 "w_mbytes_per_sec": 0 00:14:20.590 }, 00:14:20.590 "claimed": false, 00:14:20.590 "zoned": false, 00:14:20.590 "supported_io_types": { 00:14:20.590 "read": true, 00:14:20.590 "write": true, 00:14:20.590 "unmap": true, 00:14:20.590 "flush": true, 00:14:20.590 "reset": true, 00:14:20.590 "nvme_admin": false, 00:14:20.590 "nvme_io": false, 00:14:20.590 "nvme_io_md": false, 00:14:20.590 
"write_zeroes": true, 00:14:20.590 "zcopy": false, 00:14:20.590 "get_zone_info": false, 00:14:20.590 "zone_management": false, 00:14:20.590 "zone_append": false, 00:14:20.590 "compare": false, 00:14:20.590 "compare_and_write": false, 00:14:20.590 "abort": false, 00:14:20.590 "seek_hole": false, 00:14:20.590 "seek_data": false, 00:14:20.590 "copy": false, 00:14:20.590 "nvme_iov_md": false 00:14:20.590 }, 00:14:20.590 "memory_domains": [ 00:14:20.590 { 00:14:20.590 "dma_device_id": "system", 00:14:20.590 "dma_device_type": 1 00:14:20.590 }, 00:14:20.590 { 00:14:20.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.590 "dma_device_type": 2 00:14:20.590 }, 00:14:20.590 { 00:14:20.590 "dma_device_id": "system", 00:14:20.590 "dma_device_type": 1 00:14:20.590 }, 00:14:20.590 { 00:14:20.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.590 "dma_device_type": 2 00:14:20.590 }, 00:14:20.590 { 00:14:20.590 "dma_device_id": "system", 00:14:20.590 "dma_device_type": 1 00:14:20.590 }, 00:14:20.590 { 00:14:20.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.590 "dma_device_type": 2 00:14:20.590 } 00:14:20.590 ], 00:14:20.590 "driver_specific": { 00:14:20.590 "raid": { 00:14:20.590 "uuid": "8be34a15-f8c2-4f46-8c85-1254854d8c64", 00:14:20.590 "strip_size_kb": 64, 00:14:20.590 "state": "online", 00:14:20.590 "raid_level": "raid0", 00:14:20.590 "superblock": true, 00:14:20.590 "num_base_bdevs": 3, 00:14:20.590 "num_base_bdevs_discovered": 3, 00:14:20.590 "num_base_bdevs_operational": 3, 00:14:20.590 "base_bdevs_list": [ 00:14:20.590 { 00:14:20.590 "name": "BaseBdev1", 00:14:20.590 "uuid": "483680a4-ac1d-4b6a-865d-5c221ef12214", 00:14:20.590 "is_configured": true, 00:14:20.590 "data_offset": 2048, 00:14:20.590 "data_size": 63488 00:14:20.590 }, 00:14:20.590 { 00:14:20.590 "name": "BaseBdev2", 00:14:20.590 "uuid": "afea8da3-c47b-4bcb-9219-a0081ee26677", 00:14:20.590 "is_configured": true, 00:14:20.590 "data_offset": 2048, 00:14:20.590 "data_size": 63488 00:14:20.590 }, 
00:14:20.590 { 00:14:20.590 "name": "BaseBdev3", 00:14:20.590 "uuid": "ef52e152-4862-4c60-9ef8-7ec2e0ba0286", 00:14:20.590 "is_configured": true, 00:14:20.590 "data_offset": 2048, 00:14:20.590 "data_size": 63488 00:14:20.590 } 00:14:20.590 ] 00:14:20.590 } 00:14:20.590 } 00:14:20.590 }' 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:20.590 BaseBdev2 00:14:20.590 BaseBdev3' 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:20.590 
09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.590 09:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.857 [2024-10-15 09:14:04.625625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:20.857 [2024-10-15 09:14:04.625794] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:20.857 [2024-10-15 09:14:04.626032] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.857 "name": "Existed_Raid", 00:14:20.857 "uuid": "8be34a15-f8c2-4f46-8c85-1254854d8c64", 00:14:20.857 "strip_size_kb": 64, 00:14:20.857 "state": "offline", 00:14:20.857 "raid_level": "raid0", 00:14:20.857 "superblock": true, 00:14:20.857 "num_base_bdevs": 3, 00:14:20.857 "num_base_bdevs_discovered": 2, 00:14:20.857 "num_base_bdevs_operational": 2, 00:14:20.857 "base_bdevs_list": [ 00:14:20.857 { 00:14:20.857 "name": null, 00:14:20.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.857 "is_configured": false, 00:14:20.857 "data_offset": 0, 00:14:20.857 "data_size": 63488 00:14:20.857 }, 00:14:20.857 { 00:14:20.857 "name": "BaseBdev2", 00:14:20.857 "uuid": "afea8da3-c47b-4bcb-9219-a0081ee26677", 00:14:20.857 "is_configured": true, 00:14:20.857 "data_offset": 2048, 00:14:20.857 "data_size": 63488 00:14:20.857 }, 00:14:20.857 { 00:14:20.857 "name": "BaseBdev3", 00:14:20.857 "uuid": "ef52e152-4862-4c60-9ef8-7ec2e0ba0286", 
00:14:20.857 "is_configured": true, 00:14:20.857 "data_offset": 2048, 00:14:20.857 "data_size": 63488 00:14:20.857 } 00:14:20.857 ] 00:14:20.857 }' 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.857 09:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.425 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:21.425 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:21.425 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:21.425 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.425 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.425 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.425 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.425 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:21.425 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:21.425 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:21.425 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.425 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.425 [2024-10-15 09:14:05.303902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:21.683 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.683 09:14:05 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:21.683 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:21.683 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.683 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:21.683 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.683 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.683 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.683 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:21.683 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:21.683 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:21.683 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.683 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.683 [2024-10-15 09:14:05.461161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:21.683 [2024-10-15 09:14:05.461448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:21.683 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.683 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:21.683 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:21.683 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:21.683 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:21.683 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.683 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.683 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.941 BaseBdev2 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:21.941 09:14:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.941 [ 00:14:21.941 { 00:14:21.941 "name": "BaseBdev2", 00:14:21.941 "aliases": [ 00:14:21.941 "9b69a0cc-dc90-40e2-864d-c5ef49f5cab1" 00:14:21.941 ], 00:14:21.941 "product_name": "Malloc disk", 00:14:21.941 "block_size": 512, 00:14:21.941 "num_blocks": 65536, 00:14:21.941 "uuid": "9b69a0cc-dc90-40e2-864d-c5ef49f5cab1", 00:14:21.941 "assigned_rate_limits": { 00:14:21.941 "rw_ios_per_sec": 0, 00:14:21.941 "rw_mbytes_per_sec": 0, 00:14:21.941 "r_mbytes_per_sec": 0, 00:14:21.941 "w_mbytes_per_sec": 0 00:14:21.941 }, 00:14:21.941 "claimed": false, 00:14:21.941 "zoned": false, 00:14:21.941 "supported_io_types": { 00:14:21.941 "read": true, 00:14:21.941 "write": true, 00:14:21.941 "unmap": true, 00:14:21.941 "flush": true, 00:14:21.941 "reset": true, 00:14:21.941 "nvme_admin": false, 00:14:21.941 "nvme_io": false, 00:14:21.941 "nvme_io_md": false, 00:14:21.941 "write_zeroes": true, 00:14:21.941 "zcopy": true, 00:14:21.941 "get_zone_info": false, 00:14:21.941 
"zone_management": false, 00:14:21.941 "zone_append": false, 00:14:21.941 "compare": false, 00:14:21.941 "compare_and_write": false, 00:14:21.941 "abort": true, 00:14:21.941 "seek_hole": false, 00:14:21.941 "seek_data": false, 00:14:21.941 "copy": true, 00:14:21.941 "nvme_iov_md": false 00:14:21.941 }, 00:14:21.941 "memory_domains": [ 00:14:21.941 { 00:14:21.941 "dma_device_id": "system", 00:14:21.941 "dma_device_type": 1 00:14:21.941 }, 00:14:21.941 { 00:14:21.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.941 "dma_device_type": 2 00:14:21.941 } 00:14:21.941 ], 00:14:21.941 "driver_specific": {} 00:14:21.941 } 00:14:21.941 ] 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.941 BaseBdev3 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.941 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.942 [ 00:14:21.942 { 00:14:21.942 "name": "BaseBdev3", 00:14:21.942 "aliases": [ 00:14:21.942 "edd6701b-1611-4abf-b66a-3fa545d92c87" 00:14:21.942 ], 00:14:21.942 "product_name": "Malloc disk", 00:14:21.942 "block_size": 512, 00:14:21.942 "num_blocks": 65536, 00:14:21.942 "uuid": "edd6701b-1611-4abf-b66a-3fa545d92c87", 00:14:21.942 "assigned_rate_limits": { 00:14:21.942 "rw_ios_per_sec": 0, 00:14:21.942 "rw_mbytes_per_sec": 0, 00:14:21.942 "r_mbytes_per_sec": 0, 00:14:21.942 "w_mbytes_per_sec": 0 00:14:21.942 }, 00:14:21.942 "claimed": false, 00:14:21.942 "zoned": false, 00:14:21.942 "supported_io_types": { 00:14:21.942 "read": true, 00:14:21.942 "write": true, 00:14:21.942 "unmap": true, 00:14:21.942 "flush": true, 00:14:21.942 "reset": true, 00:14:21.942 "nvme_admin": false, 00:14:21.942 "nvme_io": false, 00:14:21.942 "nvme_io_md": false, 00:14:21.942 "write_zeroes": true, 00:14:21.942 
"zcopy": true, 00:14:21.942 "get_zone_info": false, 00:14:21.942 "zone_management": false, 00:14:21.942 "zone_append": false, 00:14:21.942 "compare": false, 00:14:21.942 "compare_and_write": false, 00:14:21.942 "abort": true, 00:14:21.942 "seek_hole": false, 00:14:21.942 "seek_data": false, 00:14:21.942 "copy": true, 00:14:21.942 "nvme_iov_md": false 00:14:21.942 }, 00:14:21.942 "memory_domains": [ 00:14:21.942 { 00:14:21.942 "dma_device_id": "system", 00:14:21.942 "dma_device_type": 1 00:14:21.942 }, 00:14:21.942 { 00:14:21.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.942 "dma_device_type": 2 00:14:21.942 } 00:14:21.942 ], 00:14:21.942 "driver_specific": {} 00:14:21.942 } 00:14:21.942 ] 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.942 [2024-10-15 09:14:05.772501] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:21.942 [2024-10-15 09:14:05.772709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:21.942 [2024-10-15 09:14:05.772867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:21.942 [2024-10-15 09:14:05.775681] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.942 09:14:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.942 "name": "Existed_Raid", 00:14:21.942 "uuid": "6527222e-9f48-4ba8-98ac-31ce70d6561c", 00:14:21.942 "strip_size_kb": 64, 00:14:21.942 "state": "configuring", 00:14:21.942 "raid_level": "raid0", 00:14:21.942 "superblock": true, 00:14:21.942 "num_base_bdevs": 3, 00:14:21.942 "num_base_bdevs_discovered": 2, 00:14:21.942 "num_base_bdevs_operational": 3, 00:14:21.942 "base_bdevs_list": [ 00:14:21.942 { 00:14:21.942 "name": "BaseBdev1", 00:14:21.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.942 "is_configured": false, 00:14:21.942 "data_offset": 0, 00:14:21.942 "data_size": 0 00:14:21.942 }, 00:14:21.942 { 00:14:21.942 "name": "BaseBdev2", 00:14:21.942 "uuid": "9b69a0cc-dc90-40e2-864d-c5ef49f5cab1", 00:14:21.942 "is_configured": true, 00:14:21.942 "data_offset": 2048, 00:14:21.942 "data_size": 63488 00:14:21.942 }, 00:14:21.942 { 00:14:21.942 "name": "BaseBdev3", 00:14:21.942 "uuid": "edd6701b-1611-4abf-b66a-3fa545d92c87", 00:14:21.942 "is_configured": true, 00:14:21.942 "data_offset": 2048, 00:14:21.942 "data_size": 63488 00:14:21.942 } 00:14:21.942 ] 00:14:21.942 }' 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.942 09:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.509 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:22.509 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.509 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.509 [2024-10-15 09:14:06.300612] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:22.509 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.509 09:14:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:22.509 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.509 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.509 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:22.509 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.509 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.509 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.509 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.509 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.510 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.510 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.510 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.510 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.510 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.510 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.510 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.510 "name": "Existed_Raid", 00:14:22.510 "uuid": "6527222e-9f48-4ba8-98ac-31ce70d6561c", 00:14:22.510 "strip_size_kb": 64, 
00:14:22.510 "state": "configuring", 00:14:22.510 "raid_level": "raid0", 00:14:22.510 "superblock": true, 00:14:22.510 "num_base_bdevs": 3, 00:14:22.510 "num_base_bdevs_discovered": 1, 00:14:22.510 "num_base_bdevs_operational": 3, 00:14:22.510 "base_bdevs_list": [ 00:14:22.510 { 00:14:22.510 "name": "BaseBdev1", 00:14:22.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.510 "is_configured": false, 00:14:22.510 "data_offset": 0, 00:14:22.510 "data_size": 0 00:14:22.510 }, 00:14:22.510 { 00:14:22.510 "name": null, 00:14:22.510 "uuid": "9b69a0cc-dc90-40e2-864d-c5ef49f5cab1", 00:14:22.510 "is_configured": false, 00:14:22.510 "data_offset": 0, 00:14:22.510 "data_size": 63488 00:14:22.510 }, 00:14:22.510 { 00:14:22.510 "name": "BaseBdev3", 00:14:22.510 "uuid": "edd6701b-1611-4abf-b66a-3fa545d92c87", 00:14:22.510 "is_configured": true, 00:14:22.510 "data_offset": 2048, 00:14:22.510 "data_size": 63488 00:14:22.510 } 00:14:22.510 ] 00:14:22.510 }' 00:14:22.510 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.510 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.077 [2024-10-15 09:14:06.946645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.077 BaseBdev1 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.077 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.077 
[ 00:14:23.077 { 00:14:23.077 "name": "BaseBdev1", 00:14:23.077 "aliases": [ 00:14:23.077 "2012e4b3-e50d-4269-9304-38750edb6787" 00:14:23.077 ], 00:14:23.077 "product_name": "Malloc disk", 00:14:23.077 "block_size": 512, 00:14:23.077 "num_blocks": 65536, 00:14:23.077 "uuid": "2012e4b3-e50d-4269-9304-38750edb6787", 00:14:23.077 "assigned_rate_limits": { 00:14:23.077 "rw_ios_per_sec": 0, 00:14:23.077 "rw_mbytes_per_sec": 0, 00:14:23.077 "r_mbytes_per_sec": 0, 00:14:23.077 "w_mbytes_per_sec": 0 00:14:23.077 }, 00:14:23.077 "claimed": true, 00:14:23.077 "claim_type": "exclusive_write", 00:14:23.077 "zoned": false, 00:14:23.077 "supported_io_types": { 00:14:23.078 "read": true, 00:14:23.078 "write": true, 00:14:23.078 "unmap": true, 00:14:23.078 "flush": true, 00:14:23.078 "reset": true, 00:14:23.078 "nvme_admin": false, 00:14:23.078 "nvme_io": false, 00:14:23.078 "nvme_io_md": false, 00:14:23.078 "write_zeroes": true, 00:14:23.078 "zcopy": true, 00:14:23.078 "get_zone_info": false, 00:14:23.078 "zone_management": false, 00:14:23.078 "zone_append": false, 00:14:23.078 "compare": false, 00:14:23.078 "compare_and_write": false, 00:14:23.078 "abort": true, 00:14:23.078 "seek_hole": false, 00:14:23.078 "seek_data": false, 00:14:23.078 "copy": true, 00:14:23.078 "nvme_iov_md": false 00:14:23.078 }, 00:14:23.078 "memory_domains": [ 00:14:23.078 { 00:14:23.078 "dma_device_id": "system", 00:14:23.078 "dma_device_type": 1 00:14:23.078 }, 00:14:23.078 { 00:14:23.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.078 "dma_device_type": 2 00:14:23.078 } 00:14:23.078 ], 00:14:23.078 "driver_specific": {} 00:14:23.078 } 00:14:23.078 ] 00:14:23.078 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.078 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:23.078 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:14:23.078 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.078 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.078 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:23.078 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.078 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.078 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.078 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.078 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.078 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.078 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.078 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.078 09:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.078 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.078 09:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.336 09:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.336 "name": "Existed_Raid", 00:14:23.336 "uuid": "6527222e-9f48-4ba8-98ac-31ce70d6561c", 00:14:23.336 "strip_size_kb": 64, 00:14:23.336 "state": "configuring", 00:14:23.336 "raid_level": "raid0", 00:14:23.336 "superblock": true, 
00:14:23.336 "num_base_bdevs": 3, 00:14:23.336 "num_base_bdevs_discovered": 2, 00:14:23.336 "num_base_bdevs_operational": 3, 00:14:23.336 "base_bdevs_list": [ 00:14:23.336 { 00:14:23.336 "name": "BaseBdev1", 00:14:23.336 "uuid": "2012e4b3-e50d-4269-9304-38750edb6787", 00:14:23.336 "is_configured": true, 00:14:23.336 "data_offset": 2048, 00:14:23.336 "data_size": 63488 00:14:23.336 }, 00:14:23.336 { 00:14:23.336 "name": null, 00:14:23.336 "uuid": "9b69a0cc-dc90-40e2-864d-c5ef49f5cab1", 00:14:23.336 "is_configured": false, 00:14:23.336 "data_offset": 0, 00:14:23.336 "data_size": 63488 00:14:23.336 }, 00:14:23.336 { 00:14:23.336 "name": "BaseBdev3", 00:14:23.336 "uuid": "edd6701b-1611-4abf-b66a-3fa545d92c87", 00:14:23.336 "is_configured": true, 00:14:23.336 "data_offset": 2048, 00:14:23.336 "data_size": 63488 00:14:23.336 } 00:14:23.336 ] 00:14:23.336 }' 00:14:23.336 09:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.336 09:14:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.593 09:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.593 09:14:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.593 09:14:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.593 09:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:23.593 09:14:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.898 [2024-10-15 09:14:07.558924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.898 "name": "Existed_Raid", 00:14:23.898 "uuid": "6527222e-9f48-4ba8-98ac-31ce70d6561c", 00:14:23.898 "strip_size_kb": 64, 00:14:23.898 "state": "configuring", 00:14:23.898 "raid_level": "raid0", 00:14:23.898 "superblock": true, 00:14:23.898 "num_base_bdevs": 3, 00:14:23.898 "num_base_bdevs_discovered": 1, 00:14:23.898 "num_base_bdevs_operational": 3, 00:14:23.898 "base_bdevs_list": [ 00:14:23.898 { 00:14:23.898 "name": "BaseBdev1", 00:14:23.898 "uuid": "2012e4b3-e50d-4269-9304-38750edb6787", 00:14:23.898 "is_configured": true, 00:14:23.898 "data_offset": 2048, 00:14:23.898 "data_size": 63488 00:14:23.898 }, 00:14:23.898 { 00:14:23.898 "name": null, 00:14:23.898 "uuid": "9b69a0cc-dc90-40e2-864d-c5ef49f5cab1", 00:14:23.898 "is_configured": false, 00:14:23.898 "data_offset": 0, 00:14:23.898 "data_size": 63488 00:14:23.898 }, 00:14:23.898 { 00:14:23.898 "name": null, 00:14:23.898 "uuid": "edd6701b-1611-4abf-b66a-3fa545d92c87", 00:14:23.898 "is_configured": false, 00:14:23.898 "data_offset": 0, 00:14:23.898 "data_size": 63488 00:14:23.898 } 00:14:23.898 ] 00:14:23.898 }' 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.898 09:14:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.163 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.163 09:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.163 09:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.163 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.423 [2024-10-15 09:14:08.127143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.423 "name": "Existed_Raid", 00:14:24.423 "uuid": "6527222e-9f48-4ba8-98ac-31ce70d6561c", 00:14:24.423 "strip_size_kb": 64, 00:14:24.423 "state": "configuring", 00:14:24.423 "raid_level": "raid0", 00:14:24.423 "superblock": true, 00:14:24.423 "num_base_bdevs": 3, 00:14:24.423 "num_base_bdevs_discovered": 2, 00:14:24.423 "num_base_bdevs_operational": 3, 00:14:24.423 "base_bdevs_list": [ 00:14:24.423 { 00:14:24.423 "name": "BaseBdev1", 00:14:24.423 "uuid": "2012e4b3-e50d-4269-9304-38750edb6787", 00:14:24.423 "is_configured": true, 00:14:24.423 "data_offset": 2048, 00:14:24.423 "data_size": 63488 00:14:24.423 }, 00:14:24.423 { 00:14:24.423 "name": null, 00:14:24.423 "uuid": "9b69a0cc-dc90-40e2-864d-c5ef49f5cab1", 00:14:24.423 "is_configured": false, 00:14:24.423 "data_offset": 0, 00:14:24.423 "data_size": 63488 00:14:24.423 }, 00:14:24.423 { 00:14:24.423 "name": "BaseBdev3", 00:14:24.423 "uuid": "edd6701b-1611-4abf-b66a-3fa545d92c87", 00:14:24.423 "is_configured": true, 00:14:24.423 "data_offset": 2048, 00:14:24.423 "data_size": 63488 00:14:24.423 } 00:14:24.423 ] 00:14:24.423 }' 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.423 09:14:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.991 [2024-10-15 09:14:08.679318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.991 "name": "Existed_Raid", 00:14:24.991 "uuid": "6527222e-9f48-4ba8-98ac-31ce70d6561c", 00:14:24.991 "strip_size_kb": 64, 00:14:24.991 "state": "configuring", 00:14:24.991 "raid_level": "raid0", 00:14:24.991 "superblock": true, 00:14:24.991 "num_base_bdevs": 3, 00:14:24.991 "num_base_bdevs_discovered": 1, 00:14:24.991 "num_base_bdevs_operational": 3, 00:14:24.991 "base_bdevs_list": [ 00:14:24.991 { 00:14:24.991 "name": null, 00:14:24.991 "uuid": "2012e4b3-e50d-4269-9304-38750edb6787", 00:14:24.991 "is_configured": false, 00:14:24.991 "data_offset": 0, 00:14:24.991 "data_size": 63488 00:14:24.991 }, 00:14:24.991 { 00:14:24.991 "name": null, 00:14:24.991 "uuid": "9b69a0cc-dc90-40e2-864d-c5ef49f5cab1", 00:14:24.991 "is_configured": false, 00:14:24.991 "data_offset": 0, 00:14:24.991 
"data_size": 63488 00:14:24.991 }, 00:14:24.991 { 00:14:24.991 "name": "BaseBdev3", 00:14:24.991 "uuid": "edd6701b-1611-4abf-b66a-3fa545d92c87", 00:14:24.991 "is_configured": true, 00:14:24.991 "data_offset": 2048, 00:14:24.991 "data_size": 63488 00:14:24.991 } 00:14:24.991 ] 00:14:24.991 }' 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.991 09:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.559 [2024-10-15 09:14:09.364764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:25.559 09:14:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.559 "name": "Existed_Raid", 00:14:25.559 "uuid": "6527222e-9f48-4ba8-98ac-31ce70d6561c", 00:14:25.559 "strip_size_kb": 64, 00:14:25.559 "state": "configuring", 00:14:25.559 "raid_level": "raid0", 00:14:25.559 "superblock": true, 00:14:25.559 "num_base_bdevs": 3, 00:14:25.559 
"num_base_bdevs_discovered": 2, 00:14:25.559 "num_base_bdevs_operational": 3, 00:14:25.559 "base_bdevs_list": [ 00:14:25.559 { 00:14:25.559 "name": null, 00:14:25.559 "uuid": "2012e4b3-e50d-4269-9304-38750edb6787", 00:14:25.559 "is_configured": false, 00:14:25.559 "data_offset": 0, 00:14:25.559 "data_size": 63488 00:14:25.559 }, 00:14:25.559 { 00:14:25.559 "name": "BaseBdev2", 00:14:25.559 "uuid": "9b69a0cc-dc90-40e2-864d-c5ef49f5cab1", 00:14:25.559 "is_configured": true, 00:14:25.559 "data_offset": 2048, 00:14:25.559 "data_size": 63488 00:14:25.559 }, 00:14:25.559 { 00:14:25.559 "name": "BaseBdev3", 00:14:25.559 "uuid": "edd6701b-1611-4abf-b66a-3fa545d92c87", 00:14:25.559 "is_configured": true, 00:14:25.559 "data_offset": 2048, 00:14:25.559 "data_size": 63488 00:14:25.559 } 00:14:25.559 ] 00:14:25.559 }' 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.559 09:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.125 09:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:26.125 09:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.125 09:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.125 09:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.125 09:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.125 09:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:26.125 09:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.125 09:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.125 09:14:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:26.125 09:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.125 09:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.125 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2012e4b3-e50d-4269-9304-38750edb6787 00:14:26.125 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.125 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.383 [2024-10-15 09:14:10.074753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:26.383 [2024-10-15 09:14:10.075088] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:26.383 [2024-10-15 09:14:10.075113] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:26.383 NewBaseBdev 00:14:26.384 [2024-10-15 09:14:10.075487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:26.384 [2024-10-15 09:14:10.075683] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:26.384 [2024-10-15 09:14:10.075706] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:26.384 [2024-10-15 09:14:10.075910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 
00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.384 [ 00:14:26.384 { 00:14:26.384 "name": "NewBaseBdev", 00:14:26.384 "aliases": [ 00:14:26.384 "2012e4b3-e50d-4269-9304-38750edb6787" 00:14:26.384 ], 00:14:26.384 "product_name": "Malloc disk", 00:14:26.384 "block_size": 512, 00:14:26.384 "num_blocks": 65536, 00:14:26.384 "uuid": "2012e4b3-e50d-4269-9304-38750edb6787", 00:14:26.384 "assigned_rate_limits": { 00:14:26.384 "rw_ios_per_sec": 0, 00:14:26.384 "rw_mbytes_per_sec": 0, 00:14:26.384 "r_mbytes_per_sec": 0, 00:14:26.384 "w_mbytes_per_sec": 0 00:14:26.384 }, 00:14:26.384 "claimed": true, 00:14:26.384 "claim_type": "exclusive_write", 00:14:26.384 "zoned": false, 00:14:26.384 "supported_io_types": { 00:14:26.384 "read": true, 00:14:26.384 "write": true, 
00:14:26.384 "unmap": true, 00:14:26.384 "flush": true, 00:14:26.384 "reset": true, 00:14:26.384 "nvme_admin": false, 00:14:26.384 "nvme_io": false, 00:14:26.384 "nvme_io_md": false, 00:14:26.384 "write_zeroes": true, 00:14:26.384 "zcopy": true, 00:14:26.384 "get_zone_info": false, 00:14:26.384 "zone_management": false, 00:14:26.384 "zone_append": false, 00:14:26.384 "compare": false, 00:14:26.384 "compare_and_write": false, 00:14:26.384 "abort": true, 00:14:26.384 "seek_hole": false, 00:14:26.384 "seek_data": false, 00:14:26.384 "copy": true, 00:14:26.384 "nvme_iov_md": false 00:14:26.384 }, 00:14:26.384 "memory_domains": [ 00:14:26.384 { 00:14:26.384 "dma_device_id": "system", 00:14:26.384 "dma_device_type": 1 00:14:26.384 }, 00:14:26.384 { 00:14:26.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.384 "dma_device_type": 2 00:14:26.384 } 00:14:26.384 ], 00:14:26.384 "driver_specific": {} 00:14:26.384 } 00:14:26.384 ] 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.384 "name": "Existed_Raid", 00:14:26.384 "uuid": "6527222e-9f48-4ba8-98ac-31ce70d6561c", 00:14:26.384 "strip_size_kb": 64, 00:14:26.384 "state": "online", 00:14:26.384 "raid_level": "raid0", 00:14:26.384 "superblock": true, 00:14:26.384 "num_base_bdevs": 3, 00:14:26.384 "num_base_bdevs_discovered": 3, 00:14:26.384 "num_base_bdevs_operational": 3, 00:14:26.384 "base_bdevs_list": [ 00:14:26.384 { 00:14:26.384 "name": "NewBaseBdev", 00:14:26.384 "uuid": "2012e4b3-e50d-4269-9304-38750edb6787", 00:14:26.384 "is_configured": true, 00:14:26.384 "data_offset": 2048, 00:14:26.384 "data_size": 63488 00:14:26.384 }, 00:14:26.384 { 00:14:26.384 "name": "BaseBdev2", 00:14:26.384 "uuid": "9b69a0cc-dc90-40e2-864d-c5ef49f5cab1", 00:14:26.384 "is_configured": true, 00:14:26.384 "data_offset": 2048, 00:14:26.384 "data_size": 63488 00:14:26.384 }, 00:14:26.384 { 00:14:26.384 "name": "BaseBdev3", 00:14:26.384 "uuid": 
"edd6701b-1611-4abf-b66a-3fa545d92c87", 00:14:26.384 "is_configured": true, 00:14:26.384 "data_offset": 2048, 00:14:26.384 "data_size": 63488 00:14:26.384 } 00:14:26.384 ] 00:14:26.384 }' 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.384 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.953 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:26.953 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:26.953 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:26.953 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:26.953 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:26.953 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:26.953 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:26.953 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:26.953 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.953 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.953 [2024-10-15 09:14:10.599374] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.953 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.953 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:26.953 "name": "Existed_Raid", 00:14:26.953 "aliases": [ 00:14:26.953 "6527222e-9f48-4ba8-98ac-31ce70d6561c" 
00:14:26.953 ], 00:14:26.953 "product_name": "Raid Volume", 00:14:26.953 "block_size": 512, 00:14:26.953 "num_blocks": 190464, 00:14:26.953 "uuid": "6527222e-9f48-4ba8-98ac-31ce70d6561c", 00:14:26.953 "assigned_rate_limits": { 00:14:26.953 "rw_ios_per_sec": 0, 00:14:26.953 "rw_mbytes_per_sec": 0, 00:14:26.953 "r_mbytes_per_sec": 0, 00:14:26.953 "w_mbytes_per_sec": 0 00:14:26.953 }, 00:14:26.953 "claimed": false, 00:14:26.953 "zoned": false, 00:14:26.953 "supported_io_types": { 00:14:26.953 "read": true, 00:14:26.953 "write": true, 00:14:26.953 "unmap": true, 00:14:26.953 "flush": true, 00:14:26.953 "reset": true, 00:14:26.953 "nvme_admin": false, 00:14:26.953 "nvme_io": false, 00:14:26.953 "nvme_io_md": false, 00:14:26.953 "write_zeroes": true, 00:14:26.953 "zcopy": false, 00:14:26.953 "get_zone_info": false, 00:14:26.953 "zone_management": false, 00:14:26.953 "zone_append": false, 00:14:26.953 "compare": false, 00:14:26.953 "compare_and_write": false, 00:14:26.953 "abort": false, 00:14:26.953 "seek_hole": false, 00:14:26.953 "seek_data": false, 00:14:26.953 "copy": false, 00:14:26.953 "nvme_iov_md": false 00:14:26.953 }, 00:14:26.953 "memory_domains": [ 00:14:26.953 { 00:14:26.953 "dma_device_id": "system", 00:14:26.953 "dma_device_type": 1 00:14:26.953 }, 00:14:26.953 { 00:14:26.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.953 "dma_device_type": 2 00:14:26.953 }, 00:14:26.953 { 00:14:26.953 "dma_device_id": "system", 00:14:26.953 "dma_device_type": 1 00:14:26.953 }, 00:14:26.953 { 00:14:26.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.953 "dma_device_type": 2 00:14:26.953 }, 00:14:26.953 { 00:14:26.953 "dma_device_id": "system", 00:14:26.953 "dma_device_type": 1 00:14:26.953 }, 00:14:26.953 { 00:14:26.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.953 "dma_device_type": 2 00:14:26.953 } 00:14:26.953 ], 00:14:26.953 "driver_specific": { 00:14:26.953 "raid": { 00:14:26.953 "uuid": "6527222e-9f48-4ba8-98ac-31ce70d6561c", 00:14:26.953 
"strip_size_kb": 64, 00:14:26.953 "state": "online", 00:14:26.953 "raid_level": "raid0", 00:14:26.953 "superblock": true, 00:14:26.953 "num_base_bdevs": 3, 00:14:26.953 "num_base_bdevs_discovered": 3, 00:14:26.953 "num_base_bdevs_operational": 3, 00:14:26.953 "base_bdevs_list": [ 00:14:26.953 { 00:14:26.953 "name": "NewBaseBdev", 00:14:26.953 "uuid": "2012e4b3-e50d-4269-9304-38750edb6787", 00:14:26.953 "is_configured": true, 00:14:26.953 "data_offset": 2048, 00:14:26.953 "data_size": 63488 00:14:26.953 }, 00:14:26.953 { 00:14:26.953 "name": "BaseBdev2", 00:14:26.953 "uuid": "9b69a0cc-dc90-40e2-864d-c5ef49f5cab1", 00:14:26.953 "is_configured": true, 00:14:26.953 "data_offset": 2048, 00:14:26.953 "data_size": 63488 00:14:26.953 }, 00:14:26.954 { 00:14:26.954 "name": "BaseBdev3", 00:14:26.954 "uuid": "edd6701b-1611-4abf-b66a-3fa545d92c87", 00:14:26.954 "is_configured": true, 00:14:26.954 "data_offset": 2048, 00:14:26.954 "data_size": 63488 00:14:26.954 } 00:14:26.954 ] 00:14:26.954 } 00:14:26.954 } 00:14:26.954 }' 00:14:26.954 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:26.954 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:26.954 BaseBdev2 00:14:26.954 BaseBdev3' 00:14:26.954 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.954 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:26.954 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.954 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:26.954 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:26.954 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.954 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.954 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.954 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.954 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.954 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.954 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:26.954 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.954 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.954 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.954 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.213 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.213 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.213 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.213 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:27.213 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.213 09:14:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.213 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.213 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.213 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.213 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.213 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:27.213 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.213 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.213 [2024-10-15 09:14:10.959075] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:27.213 [2024-10-15 09:14:10.959254] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.213 [2024-10-15 09:14:10.959516] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.213 [2024-10-15 09:14:10.959704] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.213 [2024-10-15 09:14:10.959740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:27.213 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.213 09:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64635 00:14:27.213 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 64635 ']' 00:14:27.213 09:14:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 64635 00:14:27.213 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:27.213 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:27.213 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64635 00:14:27.213 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:27.213 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:27.213 09:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64635' 00:14:27.213 killing process with pid 64635 00:14:27.213 09:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 64635 00:14:27.213 [2024-10-15 09:14:11.000827] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:27.213 09:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 64635 00:14:27.472 [2024-10-15 09:14:11.294529] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:28.847 09:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:28.847 00:14:28.847 real 0m12.145s 00:14:28.847 user 0m19.943s 00:14:28.847 sys 0m1.755s 00:14:28.847 09:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:28.847 ************************************ 00:14:28.847 END TEST raid_state_function_test_sb 00:14:28.847 ************************************ 00:14:28.847 09:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.847 09:14:12 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:14:28.847 09:14:12 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:28.847 09:14:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:28.847 09:14:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:28.847 ************************************ 00:14:28.847 START TEST raid_superblock_test 00:14:28.847 ************************************ 00:14:28.847 09:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:14:28.847 09:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:14:28.847 09:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:28.847 09:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:28.847 09:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:28.848 09:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:28.848 09:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:28.848 09:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:28.848 09:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:28.848 09:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:28.848 09:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:28.848 09:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:28.848 09:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:28.848 09:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:28.848 09:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:14:28.848 09:14:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:28.848 09:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:28.848 09:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65273 00:14:28.848 09:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:28.848 09:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65273 00:14:28.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.848 09:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 65273 ']' 00:14:28.848 09:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.848 09:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:28.848 09:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.848 09:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:28.848 09:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.848 [2024-10-15 09:14:12.585345] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:14:28.848 [2024-10-15 09:14:12.585514] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65273 ] 00:14:28.848 [2024-10-15 09:14:12.753844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.105 [2024-10-15 09:14:12.902337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.363 [2024-10-15 09:14:13.133750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.363 [2024-10-15 09:14:13.134223] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.928 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:29.928 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:29.928 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:29.928 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:29.928 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:29.928 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:29.928 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:29.928 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:29.928 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:29.928 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:29.928 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:29.928 
09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.928 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.928 malloc1 00:14:29.928 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.928 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:29.928 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.928 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.928 [2024-10-15 09:14:13.616195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:29.928 [2024-10-15 09:14:13.616423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.928 [2024-10-15 09:14:13.616506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:29.928 [2024-10-15 09:14:13.616790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.928 [2024-10-15 09:14:13.619803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.929 [2024-10-15 09:14:13.619848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:29.929 pt1 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.929 malloc2 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.929 [2024-10-15 09:14:13.671704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:29.929 [2024-10-15 09:14:13.671906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.929 [2024-10-15 09:14:13.671951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:29.929 [2024-10-15 09:14:13.671967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.929 [2024-10-15 09:14:13.674922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.929 [2024-10-15 09:14:13.675080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:29.929 
pt2 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.929 malloc3 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.929 [2024-10-15 09:14:13.738560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:29.929 [2024-10-15 09:14:13.738638] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.929 [2024-10-15 09:14:13.738677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:29.929 [2024-10-15 09:14:13.738694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.929 [2024-10-15 09:14:13.741670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.929 [2024-10-15 09:14:13.741839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:29.929 pt3 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.929 [2024-10-15 09:14:13.750786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:29.929 [2024-10-15 09:14:13.753478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:29.929 [2024-10-15 09:14:13.753703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:29.929 [2024-10-15 09:14:13.753997] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:29.929 [2024-10-15 09:14:13.754134] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:29.929 [2024-10-15 09:14:13.754611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:14:29.929 [2024-10-15 09:14:13.754970] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:29.929 [2024-10-15 09:14:13.755093] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:29.929 [2024-10-15 09:14:13.755513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.929 09:14:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.929 "name": "raid_bdev1", 00:14:29.929 "uuid": "32fbd20e-febc-4f2b-8e5e-370e955ddaa8", 00:14:29.929 "strip_size_kb": 64, 00:14:29.929 "state": "online", 00:14:29.929 "raid_level": "raid0", 00:14:29.929 "superblock": true, 00:14:29.929 "num_base_bdevs": 3, 00:14:29.929 "num_base_bdevs_discovered": 3, 00:14:29.929 "num_base_bdevs_operational": 3, 00:14:29.929 "base_bdevs_list": [ 00:14:29.929 { 00:14:29.929 "name": "pt1", 00:14:29.929 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.929 "is_configured": true, 00:14:29.929 "data_offset": 2048, 00:14:29.929 "data_size": 63488 00:14:29.929 }, 00:14:29.929 { 00:14:29.929 "name": "pt2", 00:14:29.929 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.929 "is_configured": true, 00:14:29.929 "data_offset": 2048, 00:14:29.929 "data_size": 63488 00:14:29.929 }, 00:14:29.929 { 00:14:29.929 "name": "pt3", 00:14:29.929 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:29.929 "is_configured": true, 00:14:29.929 "data_offset": 2048, 00:14:29.929 "data_size": 63488 00:14:29.929 } 00:14:29.929 ] 00:14:29.929 }' 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.929 09:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.495 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:30.495 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:30.495 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:30.495 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:14:30.495 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:30.495 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:30.495 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:30.495 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:30.495 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.495 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.495 [2024-10-15 09:14:14.304128] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.495 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.495 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:30.495 "name": "raid_bdev1", 00:14:30.495 "aliases": [ 00:14:30.495 "32fbd20e-febc-4f2b-8e5e-370e955ddaa8" 00:14:30.495 ], 00:14:30.495 "product_name": "Raid Volume", 00:14:30.495 "block_size": 512, 00:14:30.495 "num_blocks": 190464, 00:14:30.495 "uuid": "32fbd20e-febc-4f2b-8e5e-370e955ddaa8", 00:14:30.495 "assigned_rate_limits": { 00:14:30.495 "rw_ios_per_sec": 0, 00:14:30.495 "rw_mbytes_per_sec": 0, 00:14:30.495 "r_mbytes_per_sec": 0, 00:14:30.495 "w_mbytes_per_sec": 0 00:14:30.495 }, 00:14:30.495 "claimed": false, 00:14:30.495 "zoned": false, 00:14:30.495 "supported_io_types": { 00:14:30.495 "read": true, 00:14:30.495 "write": true, 00:14:30.495 "unmap": true, 00:14:30.495 "flush": true, 00:14:30.495 "reset": true, 00:14:30.495 "nvme_admin": false, 00:14:30.495 "nvme_io": false, 00:14:30.495 "nvme_io_md": false, 00:14:30.495 "write_zeroes": true, 00:14:30.495 "zcopy": false, 00:14:30.495 "get_zone_info": false, 00:14:30.495 "zone_management": false, 00:14:30.495 "zone_append": false, 00:14:30.495 "compare": 
false, 00:14:30.495 "compare_and_write": false, 00:14:30.495 "abort": false, 00:14:30.495 "seek_hole": false, 00:14:30.495 "seek_data": false, 00:14:30.495 "copy": false, 00:14:30.495 "nvme_iov_md": false 00:14:30.495 }, 00:14:30.495 "memory_domains": [ 00:14:30.495 { 00:14:30.495 "dma_device_id": "system", 00:14:30.495 "dma_device_type": 1 00:14:30.495 }, 00:14:30.495 { 00:14:30.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.495 "dma_device_type": 2 00:14:30.495 }, 00:14:30.495 { 00:14:30.495 "dma_device_id": "system", 00:14:30.495 "dma_device_type": 1 00:14:30.495 }, 00:14:30.495 { 00:14:30.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.495 "dma_device_type": 2 00:14:30.495 }, 00:14:30.495 { 00:14:30.495 "dma_device_id": "system", 00:14:30.495 "dma_device_type": 1 00:14:30.495 }, 00:14:30.495 { 00:14:30.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.495 "dma_device_type": 2 00:14:30.495 } 00:14:30.495 ], 00:14:30.495 "driver_specific": { 00:14:30.495 "raid": { 00:14:30.495 "uuid": "32fbd20e-febc-4f2b-8e5e-370e955ddaa8", 00:14:30.495 "strip_size_kb": 64, 00:14:30.495 "state": "online", 00:14:30.495 "raid_level": "raid0", 00:14:30.495 "superblock": true, 00:14:30.495 "num_base_bdevs": 3, 00:14:30.495 "num_base_bdevs_discovered": 3, 00:14:30.495 "num_base_bdevs_operational": 3, 00:14:30.495 "base_bdevs_list": [ 00:14:30.495 { 00:14:30.495 "name": "pt1", 00:14:30.495 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:30.495 "is_configured": true, 00:14:30.495 "data_offset": 2048, 00:14:30.495 "data_size": 63488 00:14:30.495 }, 00:14:30.495 { 00:14:30.495 "name": "pt2", 00:14:30.495 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.495 "is_configured": true, 00:14:30.495 "data_offset": 2048, 00:14:30.495 "data_size": 63488 00:14:30.496 }, 00:14:30.496 { 00:14:30.496 "name": "pt3", 00:14:30.496 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:30.496 "is_configured": true, 00:14:30.496 "data_offset": 2048, 00:14:30.496 "data_size": 
63488 00:14:30.496 } 00:14:30.496 ] 00:14:30.496 } 00:14:30.496 } 00:14:30.496 }' 00:14:30.496 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:30.496 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:30.496 pt2 00:14:30.496 pt3' 00:14:30.496 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.754 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.755 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.755 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.755 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:30.755 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.755 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.755 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:30.755 [2024-10-15 09:14:14.636095] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.755 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:31.077 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=32fbd20e-febc-4f2b-8e5e-370e955ddaa8 00:14:31.077 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 32fbd20e-febc-4f2b-8e5e-370e955ddaa8 ']' 00:14:31.077 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:31.077 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.077 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.077 [2024-10-15 09:14:14.687748] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.077 [2024-10-15 09:14:14.687921] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.077 [2024-10-15 09:14:14.688177] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.077 [2024-10-15 09:14:14.688375] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.077 [2024-10-15 09:14:14.688489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:31.077 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.077 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.077 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:31.077 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.077 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.077 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.077 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:14:31.077 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:31.077 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:31.077 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:31.077 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.077 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.078 [2024-10-15 09:14:14.835888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:31.078 [2024-10-15 09:14:14.838731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:31.078 [2024-10-15 09:14:14.838809] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:31.078 [2024-10-15 09:14:14.838891] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:31.078 [2024-10-15 09:14:14.838971] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:31.078 [2024-10-15 09:14:14.839006] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:31.078 [2024-10-15 09:14:14.839034] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.078 [2024-10-15 09:14:14.839049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:31.078 request: 00:14:31.078 { 00:14:31.078 "name": "raid_bdev1", 00:14:31.078 "raid_level": "raid0", 00:14:31.078 "base_bdevs": [ 00:14:31.078 "malloc1", 00:14:31.078 "malloc2", 00:14:31.078 "malloc3" 00:14:31.078 ], 00:14:31.078 "strip_size_kb": 64, 00:14:31.078 "superblock": false, 00:14:31.078 "method": "bdev_raid_create", 00:14:31.078 "req_id": 1 00:14:31.078 } 00:14:31.078 Got JSON-RPC error response 00:14:31.078 response: 00:14:31.078 { 00:14:31.078 "code": -17, 00:14:31.078 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:31.078 } 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.078 [2024-10-15 09:14:14.907973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:31.078 [2024-10-15 09:14:14.908061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.078 [2024-10-15 09:14:14.908095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:31.078 [2024-10-15 09:14:14.908111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.078 [2024-10-15 09:14:14.911255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.078 [2024-10-15 09:14:14.911298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:31.078 [2024-10-15 09:14:14.911427] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:31.078 [2024-10-15 09:14:14.911502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:14:31.078 pt1 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.078 "name": "raid_bdev1", 00:14:31.078 "uuid": "32fbd20e-febc-4f2b-8e5e-370e955ddaa8", 00:14:31.078 
"strip_size_kb": 64, 00:14:31.078 "state": "configuring", 00:14:31.078 "raid_level": "raid0", 00:14:31.078 "superblock": true, 00:14:31.078 "num_base_bdevs": 3, 00:14:31.078 "num_base_bdevs_discovered": 1, 00:14:31.078 "num_base_bdevs_operational": 3, 00:14:31.078 "base_bdevs_list": [ 00:14:31.078 { 00:14:31.078 "name": "pt1", 00:14:31.078 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:31.078 "is_configured": true, 00:14:31.078 "data_offset": 2048, 00:14:31.078 "data_size": 63488 00:14:31.078 }, 00:14:31.078 { 00:14:31.078 "name": null, 00:14:31.078 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.078 "is_configured": false, 00:14:31.078 "data_offset": 2048, 00:14:31.078 "data_size": 63488 00:14:31.078 }, 00:14:31.078 { 00:14:31.078 "name": null, 00:14:31.078 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:31.078 "is_configured": false, 00:14:31.078 "data_offset": 2048, 00:14:31.078 "data_size": 63488 00:14:31.078 } 00:14:31.078 ] 00:14:31.078 }' 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.078 09:14:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.646 [2024-10-15 09:14:15.456164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:31.646 [2024-10-15 09:14:15.456381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.646 [2024-10-15 09:14:15.456430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:14:31.646 [2024-10-15 09:14:15.456448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.646 [2024-10-15 09:14:15.457064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.646 [2024-10-15 09:14:15.457096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:31.646 [2024-10-15 09:14:15.457241] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:31.646 [2024-10-15 09:14:15.457275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:31.646 pt2 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.646 [2024-10-15 09:14:15.464153] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.646 09:14:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.646 "name": "raid_bdev1", 00:14:31.646 "uuid": "32fbd20e-febc-4f2b-8e5e-370e955ddaa8", 00:14:31.646 "strip_size_kb": 64, 00:14:31.646 "state": "configuring", 00:14:31.646 "raid_level": "raid0", 00:14:31.646 "superblock": true, 00:14:31.646 "num_base_bdevs": 3, 00:14:31.646 "num_base_bdevs_discovered": 1, 00:14:31.646 "num_base_bdevs_operational": 3, 00:14:31.646 "base_bdevs_list": [ 00:14:31.646 { 00:14:31.646 "name": "pt1", 00:14:31.646 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:31.646 "is_configured": true, 00:14:31.646 "data_offset": 2048, 00:14:31.646 "data_size": 63488 00:14:31.646 }, 00:14:31.646 { 00:14:31.646 "name": null, 00:14:31.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.646 "is_configured": false, 00:14:31.646 "data_offset": 0, 00:14:31.646 "data_size": 63488 00:14:31.646 }, 00:14:31.646 { 00:14:31.646 "name": null, 00:14:31.646 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:31.646 
"is_configured": false, 00:14:31.646 "data_offset": 2048, 00:14:31.646 "data_size": 63488 00:14:31.646 } 00:14:31.646 ] 00:14:31.646 }' 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.646 09:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.214 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:32.214 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:32.214 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:32.214 09:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.214 09:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.214 [2024-10-15 09:14:15.988289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:32.214 [2024-10-15 09:14:15.988529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.214 [2024-10-15 09:14:15.988602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:32.214 [2024-10-15 09:14:15.988729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.214 [2024-10-15 09:14:15.989399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.214 [2024-10-15 09:14:15.989431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:32.214 [2024-10-15 09:14:15.989549] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:32.214 [2024-10-15 09:14:15.989588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:32.214 pt2 00:14:32.214 09:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:32.214 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:32.214 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:32.214 09:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:32.214 09:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.214 09:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.214 [2024-10-15 09:14:15.996269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:32.214 [2024-10-15 09:14:15.996454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.214 [2024-10-15 09:14:15.996519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:32.214 [2024-10-15 09:14:15.996761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.214 [2024-10-15 09:14:15.997336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.214 [2024-10-15 09:14:15.997489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:32.214 [2024-10-15 09:14:15.997688] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:32.214 [2024-10-15 09:14:15.997830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:32.214 [2024-10-15 09:14:15.998058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:32.214 [2024-10-15 09:14:15.998193] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:32.214 [2024-10-15 09:14:15.998587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:32.214 [2024-10-15 09:14:15.998900] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:32.214 [2024-10-15 09:14:15.999016] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:32.214 [2024-10-15 09:14:15.999338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.214 pt3 00:14:32.214 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.214 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:32.214 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:32.214 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:32.214 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.214 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.214 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:32.214 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.214 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.214 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.214 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.214 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.214 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.214 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.214 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:14:32.214 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.214 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.214 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.214 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.214 "name": "raid_bdev1", 00:14:32.214 "uuid": "32fbd20e-febc-4f2b-8e5e-370e955ddaa8", 00:14:32.214 "strip_size_kb": 64, 00:14:32.214 "state": "online", 00:14:32.214 "raid_level": "raid0", 00:14:32.214 "superblock": true, 00:14:32.214 "num_base_bdevs": 3, 00:14:32.214 "num_base_bdevs_discovered": 3, 00:14:32.214 "num_base_bdevs_operational": 3, 00:14:32.214 "base_bdevs_list": [ 00:14:32.214 { 00:14:32.214 "name": "pt1", 00:14:32.214 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:32.214 "is_configured": true, 00:14:32.214 "data_offset": 2048, 00:14:32.214 "data_size": 63488 00:14:32.214 }, 00:14:32.214 { 00:14:32.214 "name": "pt2", 00:14:32.214 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:32.214 "is_configured": true, 00:14:32.214 "data_offset": 2048, 00:14:32.214 "data_size": 63488 00:14:32.214 }, 00:14:32.214 { 00:14:32.214 "name": "pt3", 00:14:32.214 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:32.214 "is_configured": true, 00:14:32.214 "data_offset": 2048, 00:14:32.214 "data_size": 63488 00:14:32.214 } 00:14:32.214 ] 00:14:32.214 }' 00:14:32.214 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.214 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.781 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:32.781 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:32.781 09:14:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:32.781 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:32.781 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:32.781 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:32.781 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:32.781 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.781 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.781 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:32.781 [2024-10-15 09:14:16.524870] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:32.781 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.781 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:32.781 "name": "raid_bdev1", 00:14:32.781 "aliases": [ 00:14:32.781 "32fbd20e-febc-4f2b-8e5e-370e955ddaa8" 00:14:32.781 ], 00:14:32.781 "product_name": "Raid Volume", 00:14:32.781 "block_size": 512, 00:14:32.781 "num_blocks": 190464, 00:14:32.781 "uuid": "32fbd20e-febc-4f2b-8e5e-370e955ddaa8", 00:14:32.781 "assigned_rate_limits": { 00:14:32.781 "rw_ios_per_sec": 0, 00:14:32.781 "rw_mbytes_per_sec": 0, 00:14:32.781 "r_mbytes_per_sec": 0, 00:14:32.781 "w_mbytes_per_sec": 0 00:14:32.781 }, 00:14:32.781 "claimed": false, 00:14:32.781 "zoned": false, 00:14:32.781 "supported_io_types": { 00:14:32.781 "read": true, 00:14:32.782 "write": true, 00:14:32.782 "unmap": true, 00:14:32.782 "flush": true, 00:14:32.782 "reset": true, 00:14:32.782 "nvme_admin": false, 00:14:32.782 "nvme_io": false, 00:14:32.782 "nvme_io_md": false, 00:14:32.782 
"write_zeroes": true, 00:14:32.782 "zcopy": false, 00:14:32.782 "get_zone_info": false, 00:14:32.782 "zone_management": false, 00:14:32.782 "zone_append": false, 00:14:32.782 "compare": false, 00:14:32.782 "compare_and_write": false, 00:14:32.782 "abort": false, 00:14:32.782 "seek_hole": false, 00:14:32.782 "seek_data": false, 00:14:32.782 "copy": false, 00:14:32.782 "nvme_iov_md": false 00:14:32.782 }, 00:14:32.782 "memory_domains": [ 00:14:32.782 { 00:14:32.782 "dma_device_id": "system", 00:14:32.782 "dma_device_type": 1 00:14:32.782 }, 00:14:32.782 { 00:14:32.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.782 "dma_device_type": 2 00:14:32.782 }, 00:14:32.782 { 00:14:32.782 "dma_device_id": "system", 00:14:32.782 "dma_device_type": 1 00:14:32.782 }, 00:14:32.782 { 00:14:32.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.782 "dma_device_type": 2 00:14:32.782 }, 00:14:32.782 { 00:14:32.782 "dma_device_id": "system", 00:14:32.782 "dma_device_type": 1 00:14:32.782 }, 00:14:32.782 { 00:14:32.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.782 "dma_device_type": 2 00:14:32.782 } 00:14:32.782 ], 00:14:32.782 "driver_specific": { 00:14:32.782 "raid": { 00:14:32.782 "uuid": "32fbd20e-febc-4f2b-8e5e-370e955ddaa8", 00:14:32.782 "strip_size_kb": 64, 00:14:32.782 "state": "online", 00:14:32.782 "raid_level": "raid0", 00:14:32.782 "superblock": true, 00:14:32.782 "num_base_bdevs": 3, 00:14:32.782 "num_base_bdevs_discovered": 3, 00:14:32.782 "num_base_bdevs_operational": 3, 00:14:32.782 "base_bdevs_list": [ 00:14:32.782 { 00:14:32.782 "name": "pt1", 00:14:32.782 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:32.782 "is_configured": true, 00:14:32.782 "data_offset": 2048, 00:14:32.782 "data_size": 63488 00:14:32.782 }, 00:14:32.782 { 00:14:32.782 "name": "pt2", 00:14:32.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:32.782 "is_configured": true, 00:14:32.782 "data_offset": 2048, 00:14:32.782 "data_size": 63488 00:14:32.782 }, 00:14:32.782 
{ 00:14:32.782 "name": "pt3", 00:14:32.782 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:32.782 "is_configured": true, 00:14:32.782 "data_offset": 2048, 00:14:32.782 "data_size": 63488 00:14:32.782 } 00:14:32.782 ] 00:14:32.782 } 00:14:32.782 } 00:14:32.782 }' 00:14:32.782 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:32.782 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:32.782 pt2 00:14:32.782 pt3' 00:14:32.782 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.782 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:32.782 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:32.782 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:32.782 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.782 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.782 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.782 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.040 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:33.040 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:33.040 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:33.040 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:14:33.040 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:33.041 [2024-10-15 
09:14:16.872907] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 32fbd20e-febc-4f2b-8e5e-370e955ddaa8 '!=' 32fbd20e-febc-4f2b-8e5e-370e955ddaa8 ']' 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65273 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 65273 ']' 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 65273 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65273 00:14:33.041 killing process with pid 65273 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65273' 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 65273 00:14:33.041 [2024-10-15 09:14:16.953048] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:33.041 09:14:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@974 -- # wait 65273 00:14:33.041 [2024-10-15 09:14:16.953222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:33.041 [2024-10-15 09:14:16.953312] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:33.041 [2024-10-15 09:14:16.953332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:33.607 [2024-10-15 09:14:17.247443] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:34.556 09:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:34.556 00:14:34.556 real 0m5.877s 00:14:34.556 user 0m8.783s 00:14:34.556 sys 0m0.901s 00:14:34.556 09:14:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:34.556 ************************************ 00:14:34.556 END TEST raid_superblock_test 00:14:34.556 ************************************ 00:14:34.556 09:14:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.556 09:14:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:14:34.556 09:14:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:34.556 09:14:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:34.556 09:14:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:34.556 ************************************ 00:14:34.556 START TEST raid_read_error_test 00:14:34.556 ************************************ 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:34.556 09:14:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7vaXZgLtMc 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65532 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65532 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 65532 ']' 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:34.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:34.556 09:14:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.828 [2024-10-15 09:14:18.535226] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:14:34.828 [2024-10-15 09:14:18.535429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65532 ] 00:14:34.828 [2024-10-15 09:14:18.709737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.087 [2024-10-15 09:14:18.855520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.345 [2024-10-15 09:14:19.078564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.345 [2024-10-15 09:14:19.078637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.604 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:35.604 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:35.604 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:35.604 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:35.604 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.604 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.863 BaseBdev1_malloc 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.863 true 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.863 [2024-10-15 09:14:19.552694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:35.863 [2024-10-15 09:14:19.552771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.863 [2024-10-15 09:14:19.552804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:35.863 [2024-10-15 09:14:19.552824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.863 [2024-10-15 09:14:19.555872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.863 [2024-10-15 09:14:19.555926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:35.863 BaseBdev1 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.863 BaseBdev2_malloc 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.863 true 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.863 [2024-10-15 09:14:19.616287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:35.863 [2024-10-15 09:14:19.616498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.863 [2024-10-15 09:14:19.616539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:35.863 [2024-10-15 09:14:19.616558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.863 [2024-10-15 09:14:19.619573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.863 [2024-10-15 09:14:19.619623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:35.863 BaseBdev2 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.863 BaseBdev3_malloc 00:14:35.863 09:14:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.863 true 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.863 [2024-10-15 09:14:19.687388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:35.863 [2024-10-15 09:14:19.687610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.863 [2024-10-15 09:14:19.687650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:35.863 [2024-10-15 09:14:19.687670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.863 [2024-10-15 09:14:19.690657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.863 [2024-10-15 09:14:19.690829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:35.863 BaseBdev3 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.863 [2024-10-15 09:14:19.695537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.863 [2024-10-15 09:14:19.698157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:35.863 [2024-10-15 09:14:19.698283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:35.863 [2024-10-15 09:14:19.698569] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:35.863 [2024-10-15 09:14:19.698592] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:35.863 [2024-10-15 09:14:19.698934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:35.863 [2024-10-15 09:14:19.699187] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:35.863 [2024-10-15 09:14:19.699212] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:35.863 [2024-10-15 09:14:19.699457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.863 09:14:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.863 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.864 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.864 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.864 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.864 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.864 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.864 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.864 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.864 "name": "raid_bdev1", 00:14:35.864 "uuid": "9e8aad36-3364-4770-b80b-b1e954b1de4d", 00:14:35.864 "strip_size_kb": 64, 00:14:35.864 "state": "online", 00:14:35.864 "raid_level": "raid0", 00:14:35.864 "superblock": true, 00:14:35.864 "num_base_bdevs": 3, 00:14:35.864 "num_base_bdevs_discovered": 3, 00:14:35.864 "num_base_bdevs_operational": 3, 00:14:35.864 "base_bdevs_list": [ 00:14:35.864 { 00:14:35.864 "name": "BaseBdev1", 00:14:35.864 "uuid": "de21ff97-6d52-53c0-878b-4836545c9675", 00:14:35.864 "is_configured": true, 00:14:35.864 "data_offset": 2048, 00:14:35.864 "data_size": 63488 00:14:35.864 }, 00:14:35.864 { 00:14:35.864 "name": "BaseBdev2", 00:14:35.864 "uuid": "578172ef-1214-525a-845e-83b02367851f", 00:14:35.864 "is_configured": true, 00:14:35.864 "data_offset": 2048, 00:14:35.864 "data_size": 63488 
00:14:35.864 }, 00:14:35.864 { 00:14:35.864 "name": "BaseBdev3", 00:14:35.864 "uuid": "9c9a2e3b-8e08-5a95-a6f0-d0e32c23d0ec", 00:14:35.864 "is_configured": true, 00:14:35.864 "data_offset": 2048, 00:14:35.864 "data_size": 63488 00:14:35.864 } 00:14:35.864 ] 00:14:35.864 }' 00:14:35.864 09:14:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.864 09:14:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.431 09:14:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:36.431 09:14:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:36.431 [2024-10-15 09:14:20.349247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.368 "name": "raid_bdev1", 00:14:37.368 "uuid": "9e8aad36-3364-4770-b80b-b1e954b1de4d", 00:14:37.368 "strip_size_kb": 64, 00:14:37.368 "state": "online", 00:14:37.368 "raid_level": "raid0", 00:14:37.368 "superblock": true, 00:14:37.368 "num_base_bdevs": 3, 00:14:37.368 "num_base_bdevs_discovered": 3, 00:14:37.368 "num_base_bdevs_operational": 3, 00:14:37.368 "base_bdevs_list": [ 00:14:37.368 { 00:14:37.368 "name": "BaseBdev1", 00:14:37.368 "uuid": "de21ff97-6d52-53c0-878b-4836545c9675", 00:14:37.368 "is_configured": true, 00:14:37.368 "data_offset": 2048, 00:14:37.368 "data_size": 63488 
00:14:37.368 }, 00:14:37.368 { 00:14:37.368 "name": "BaseBdev2", 00:14:37.368 "uuid": "578172ef-1214-525a-845e-83b02367851f", 00:14:37.368 "is_configured": true, 00:14:37.368 "data_offset": 2048, 00:14:37.368 "data_size": 63488 00:14:37.368 }, 00:14:37.368 { 00:14:37.368 "name": "BaseBdev3", 00:14:37.368 "uuid": "9c9a2e3b-8e08-5a95-a6f0-d0e32c23d0ec", 00:14:37.368 "is_configured": true, 00:14:37.368 "data_offset": 2048, 00:14:37.368 "data_size": 63488 00:14:37.368 } 00:14:37.368 ] 00:14:37.368 }' 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.368 09:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.935 09:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:37.935 09:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.935 09:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.935 [2024-10-15 09:14:21.756608] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:37.935 [2024-10-15 09:14:21.756816] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.935 [2024-10-15 09:14:21.760393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.935 [2024-10-15 09:14:21.760603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.935 [2024-10-15 09:14:21.760713] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.935 { 00:14:37.935 "results": [ 00:14:37.935 { 00:14:37.935 "job": "raid_bdev1", 00:14:37.935 "core_mask": "0x1", 00:14:37.935 "workload": "randrw", 00:14:37.935 "percentage": 50, 00:14:37.935 "status": "finished", 00:14:37.935 "queue_depth": 1, 00:14:37.935 "io_size": 131072, 00:14:37.935 "runtime": 1.404772, 00:14:37.935 "iops": 9886.301834034277, 00:14:37.935 "mibps": 1235.7877292542846, 00:14:37.935 "io_failed": 1, 00:14:37.935 "io_timeout": 0, 00:14:37.935 "avg_latency_us": 142.57978805987733, 00:14:37.935 "min_latency_us": 38.63272727272727, 00:14:37.935 "max_latency_us": 1876.7127272727273 00:14:37.935 } 00:14:37.935 ], 00:14:37.935 "core_count": 1 00:14:37.935 } 00:14:37.935 [2024-10-15 09:14:21.760845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:37.935 09:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.935 09:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65532 00:14:37.935 09:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 65532 ']' 00:14:37.935 09:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 65532 00:14:37.935 09:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:14:37.935 09:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:37.935 09:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65532 00:14:37.935 09:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:37.935 09:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:37.935 09:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65532' 00:14:37.935 killing process with pid 65532 09:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 65532 00:14:37.935 [2024-10-15 09:14:21.801257] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:37.935 09:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 65532 00:14:38.193 [2024-10-15
09:14:22.033864] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:39.580 09:14:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7vaXZgLtMc 00:14:39.580 09:14:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:39.580 09:14:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:39.580 09:14:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:14:39.580 09:14:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:39.580 ************************************ 00:14:39.580 END TEST raid_read_error_test 00:14:39.580 ************************************ 00:14:39.580 09:14:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:39.580 09:14:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:39.580 09:14:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:14:39.580 00:14:39.580 real 0m4.838s 00:14:39.580 user 0m5.885s 00:14:39.580 sys 0m0.668s 00:14:39.580 09:14:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:39.580 09:14:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.580 09:14:23 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:14:39.580 09:14:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:39.580 09:14:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:39.580 09:14:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:39.580 ************************************ 00:14:39.580 START TEST raid_write_error_test 00:14:39.580 ************************************ 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:14:39.580 09:14:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:39.580 09:14:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.demWPzsR4K 00:14:39.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65683 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65683 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 65683 ']' 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:39.580 09:14:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.580 [2024-10-15 09:14:23.419380] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:14:39.580 [2024-10-15 09:14:23.419586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65683 ] 00:14:39.839 [2024-10-15 09:14:23.599185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.097 [2024-10-15 09:14:23.772371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.097 [2024-10-15 09:14:23.998470] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.097 [2024-10-15 09:14:23.998798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.661 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:40.661 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:40.661 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:40.661 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:40.661 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.661 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.661 BaseBdev1_malloc 00:14:40.661 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.661 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:14:40.661 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.661 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.661 true 00:14:40.661 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.661 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:40.661 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.661 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.661 [2024-10-15 09:14:24.551860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:40.661 [2024-10-15 09:14:24.551939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.661 [2024-10-15 09:14:24.551973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:40.661 [2024-10-15 09:14:24.551994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.661 [2024-10-15 09:14:24.555000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.661 [2024-10-15 09:14:24.555206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:40.661 BaseBdev1 00:14:40.661 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.661 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:40.661 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:40.661 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.661 09:14:24 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:40.920 BaseBdev2_malloc 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.920 true 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.920 [2024-10-15 09:14:24.615689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:40.920 [2024-10-15 09:14:24.615896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.920 [2024-10-15 09:14:24.615936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:40.920 [2024-10-15 09:14:24.615955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.920 [2024-10-15 09:14:24.618999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.920 [2024-10-15 09:14:24.619208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:40.920 BaseBdev2 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:40.920 09:14:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.920 BaseBdev3_malloc 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.920 true 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.920 [2024-10-15 09:14:24.696561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:40.920 [2024-10-15 09:14:24.696639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.920 [2024-10-15 09:14:24.696671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:40.920 [2024-10-15 09:14:24.696690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.920 [2024-10-15 09:14:24.699733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.920 [2024-10-15 09:14:24.699786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:14:40.920 BaseBdev3 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.920 [2024-10-15 09:14:24.704670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:40.920 [2024-10-15 09:14:24.707441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:40.920 [2024-10-15 09:14:24.707696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:40.920 [2024-10-15 09:14:24.708024] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:40.920 [2024-10-15 09:14:24.708187] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:40.920 [2024-10-15 09:14:24.708715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:40.920 [2024-10-15 09:14:24.708954] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:40.920 [2024-10-15 09:14:24.708977] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:40.920 [2024-10-15 09:14:24.709244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.920 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.920 "name": "raid_bdev1", 00:14:40.920 "uuid": "dc4ab4a1-b40d-4ffc-9045-593f2254a202", 00:14:40.920 "strip_size_kb": 64, 00:14:40.920 "state": "online", 00:14:40.920 "raid_level": "raid0", 00:14:40.920 "superblock": true, 00:14:40.920 "num_base_bdevs": 3, 00:14:40.920 "num_base_bdevs_discovered": 3, 00:14:40.920 "num_base_bdevs_operational": 3, 00:14:40.920 "base_bdevs_list": [ 00:14:40.920 { 00:14:40.920 "name": "BaseBdev1", 
00:14:40.920 "uuid": "f7aa5806-1503-5d16-bfb0-5ed197042796", 00:14:40.920 "is_configured": true, 00:14:40.920 "data_offset": 2048, 00:14:40.920 "data_size": 63488 00:14:40.921 }, 00:14:40.921 { 00:14:40.921 "name": "BaseBdev2", 00:14:40.921 "uuid": "6f58fd39-9710-5bfa-854d-a932223a11ee", 00:14:40.921 "is_configured": true, 00:14:40.921 "data_offset": 2048, 00:14:40.921 "data_size": 63488 00:14:40.921 }, 00:14:40.921 { 00:14:40.921 "name": "BaseBdev3", 00:14:40.921 "uuid": "66f87459-e470-59b6-94f7-4c84264f1ab3", 00:14:40.921 "is_configured": true, 00:14:40.921 "data_offset": 2048, 00:14:40.921 "data_size": 63488 00:14:40.921 } 00:14:40.921 ] 00:14:40.921 }' 00:14:40.921 09:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.921 09:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.488 09:14:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:41.488 09:14:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:41.488 [2024-10-15 09:14:25.366894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.425 "name": "raid_bdev1", 00:14:42.425 "uuid": "dc4ab4a1-b40d-4ffc-9045-593f2254a202", 00:14:42.425 "strip_size_kb": 64, 00:14:42.425 "state": "online", 00:14:42.425 
"raid_level": "raid0", 00:14:42.425 "superblock": true, 00:14:42.425 "num_base_bdevs": 3, 00:14:42.425 "num_base_bdevs_discovered": 3, 00:14:42.425 "num_base_bdevs_operational": 3, 00:14:42.425 "base_bdevs_list": [ 00:14:42.425 { 00:14:42.425 "name": "BaseBdev1", 00:14:42.425 "uuid": "f7aa5806-1503-5d16-bfb0-5ed197042796", 00:14:42.425 "is_configured": true, 00:14:42.425 "data_offset": 2048, 00:14:42.425 "data_size": 63488 00:14:42.425 }, 00:14:42.425 { 00:14:42.425 "name": "BaseBdev2", 00:14:42.425 "uuid": "6f58fd39-9710-5bfa-854d-a932223a11ee", 00:14:42.425 "is_configured": true, 00:14:42.425 "data_offset": 2048, 00:14:42.425 "data_size": 63488 00:14:42.425 }, 00:14:42.425 { 00:14:42.425 "name": "BaseBdev3", 00:14:42.425 "uuid": "66f87459-e470-59b6-94f7-4c84264f1ab3", 00:14:42.425 "is_configured": true, 00:14:42.425 "data_offset": 2048, 00:14:42.425 "data_size": 63488 00:14:42.425 } 00:14:42.425 ] 00:14:42.425 }' 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.425 09:14:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.993 09:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:42.993 09:14:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.993 09:14:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.993 [2024-10-15 09:14:26.769625] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:42.993 [2024-10-15 09:14:26.769663] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:42.993 [2024-10-15 09:14:26.773552] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.993 [2024-10-15 09:14:26.773829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.993 [2024-10-15 09:14:26.774036] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.993 [2024-10-15 09:14:26.774216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:42.993 { 00:14:42.993 "results": [ 00:14:42.993 { 00:14:42.993 "job": "raid_bdev1", 00:14:42.993 "core_mask": "0x1", 00:14:42.993 "workload": "randrw", 00:14:42.993 "percentage": 50, 00:14:42.993 "status": "finished", 00:14:42.993 "queue_depth": 1, 00:14:42.993 "io_size": 131072, 00:14:42.993 "runtime": 1.399902, 00:14:42.993 "iops": 9609.958411374511, 00:14:42.993 "mibps": 1201.244801421814, 00:14:42.993 "io_failed": 1, 00:14:42.993 "io_timeout": 0, 00:14:42.993 "avg_latency_us": 146.5638834006784, 00:14:42.993 "min_latency_us": 41.658181818181816, 00:14:42.993 "max_latency_us": 1980.9745454545455 00:14:42.993 } 00:14:42.993 ], 00:14:42.993 "core_count": 1 00:14:42.993 } 00:14:42.993 09:14:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.993 09:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65683 00:14:42.993 09:14:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 65683 ']' 00:14:42.993 09:14:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 65683 00:14:42.993 09:14:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:14:42.993 09:14:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:42.993 09:14:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65683 00:14:42.993 killing process with pid 65683 00:14:42.993 09:14:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:42.993 09:14:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:42.993 09:14:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65683' 00:14:42.993 09:14:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 65683 00:14:42.993 09:14:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 65683 00:14:42.993 [2024-10-15 09:14:26.812287] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:43.253 [2024-10-15 09:14:27.039608] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:44.628 09:14:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.demWPzsR4K 00:14:44.628 09:14:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:44.628 09:14:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:44.628 09:14:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:14:44.628 09:14:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:44.628 09:14:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:44.628 09:14:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:44.628 ************************************ 00:14:44.628 END TEST raid_write_error_test 00:14:44.628 ************************************ 00:14:44.628 09:14:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:14:44.628 00:14:44.628 real 0m4.940s 00:14:44.628 user 0m6.097s 00:14:44.628 sys 0m0.664s 00:14:44.628 09:14:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:44.628 09:14:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.628 09:14:28 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:44.628 09:14:28 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:14:44.628 09:14:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:44.628 09:14:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:44.628 09:14:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:44.628 ************************************ 00:14:44.628 START TEST raid_state_function_test 00:14:44.628 ************************************ 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:44.628 09:14:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:44.628 Process raid pid: 65827 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65827 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65827' 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65827 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:44.628 09:14:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 65827 ']' 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:44.628 09:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.628 [2024-10-15 09:14:28.406110] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:14:44.628 [2024-10-15 09:14:28.406591] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.887 [2024-10-15 09:14:28.586774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.887 [2024-10-15 09:14:28.738136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.146 [2024-10-15 09:14:28.967385] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:45.146 [2024-10-15 09:14:28.967668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.714 [2024-10-15 09:14:29.431031] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:45.714 [2024-10-15 09:14:29.431281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:45.714 [2024-10-15 09:14:29.431462] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.714 [2024-10-15 09:14:29.431654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.714 [2024-10-15 09:14:29.431696] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:45.714 [2024-10-15 09:14:29.431729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.714 "name": "Existed_Raid", 00:14:45.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.714 "strip_size_kb": 64, 00:14:45.714 "state": "configuring", 00:14:45.714 "raid_level": "concat", 00:14:45.714 "superblock": false, 00:14:45.714 "num_base_bdevs": 3, 00:14:45.714 "num_base_bdevs_discovered": 0, 00:14:45.714 "num_base_bdevs_operational": 3, 00:14:45.714 "base_bdevs_list": [ 00:14:45.714 { 00:14:45.714 "name": "BaseBdev1", 00:14:45.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.714 "is_configured": false, 00:14:45.714 "data_offset": 0, 00:14:45.714 "data_size": 0 00:14:45.714 }, 00:14:45.714 { 00:14:45.714 "name": "BaseBdev2", 00:14:45.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.714 "is_configured": false, 00:14:45.714 "data_offset": 0, 00:14:45.714 "data_size": 0 00:14:45.714 }, 00:14:45.714 { 00:14:45.714 "name": "BaseBdev3", 00:14:45.714 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:45.714 "is_configured": false, 00:14:45.714 "data_offset": 0, 00:14:45.714 "data_size": 0 00:14:45.714 } 00:14:45.714 ] 00:14:45.714 }' 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.714 09:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.283 09:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:46.283 09:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.283 09:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.283 [2024-10-15 09:14:29.963069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:46.283 [2024-10-15 09:14:29.963145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:46.283 09:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.283 09:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:46.283 09:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.283 09:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.283 [2024-10-15 09:14:29.971062] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:46.283 [2024-10-15 09:14:29.971284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:46.283 [2024-10-15 09:14:29.971435] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:46.283 [2024-10-15 09:14:29.971472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:14:46.283 [2024-10-15 09:14:29.971486] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:46.283 [2024-10-15 09:14:29.971508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:46.283 09:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.283 09:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:46.283 09:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.283 09:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.283 [2024-10-15 09:14:30.019982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:46.283 BaseBdev1 00:14:46.283 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.283 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:46.283 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:46.283 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:46.283 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:46.283 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:46.283 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:46.283 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:46.283 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.283 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:14:46.283 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.283 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:46.283 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.283 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.283 [ 00:14:46.283 { 00:14:46.283 "name": "BaseBdev1", 00:14:46.283 "aliases": [ 00:14:46.283 "76fb9801-9c4b-4ecf-abf1-16eaa9d5667c" 00:14:46.283 ], 00:14:46.283 "product_name": "Malloc disk", 00:14:46.283 "block_size": 512, 00:14:46.283 "num_blocks": 65536, 00:14:46.283 "uuid": "76fb9801-9c4b-4ecf-abf1-16eaa9d5667c", 00:14:46.283 "assigned_rate_limits": { 00:14:46.283 "rw_ios_per_sec": 0, 00:14:46.283 "rw_mbytes_per_sec": 0, 00:14:46.283 "r_mbytes_per_sec": 0, 00:14:46.283 "w_mbytes_per_sec": 0 00:14:46.283 }, 00:14:46.284 "claimed": true, 00:14:46.284 "claim_type": "exclusive_write", 00:14:46.284 "zoned": false, 00:14:46.284 "supported_io_types": { 00:14:46.284 "read": true, 00:14:46.284 "write": true, 00:14:46.284 "unmap": true, 00:14:46.284 "flush": true, 00:14:46.284 "reset": true, 00:14:46.284 "nvme_admin": false, 00:14:46.284 "nvme_io": false, 00:14:46.284 "nvme_io_md": false, 00:14:46.284 "write_zeroes": true, 00:14:46.284 "zcopy": true, 00:14:46.284 "get_zone_info": false, 00:14:46.284 "zone_management": false, 00:14:46.284 "zone_append": false, 00:14:46.284 "compare": false, 00:14:46.284 "compare_and_write": false, 00:14:46.284 "abort": true, 00:14:46.284 "seek_hole": false, 00:14:46.284 "seek_data": false, 00:14:46.284 "copy": true, 00:14:46.284 "nvme_iov_md": false 00:14:46.284 }, 00:14:46.284 "memory_domains": [ 00:14:46.284 { 00:14:46.284 "dma_device_id": "system", 00:14:46.284 "dma_device_type": 1 00:14:46.284 }, 00:14:46.284 { 00:14:46.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:46.284 "dma_device_type": 2 00:14:46.284 } 00:14:46.284 ], 00:14:46.284 "driver_specific": {} 00:14:46.284 } 00:14:46.284 ] 00:14:46.284 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.284 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:46.284 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:46.284 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.284 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.284 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:46.284 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.284 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.284 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.284 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.284 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.284 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.284 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.284 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.284 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.284 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.284 09:14:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.284 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.284 "name": "Existed_Raid", 00:14:46.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.284 "strip_size_kb": 64, 00:14:46.284 "state": "configuring", 00:14:46.284 "raid_level": "concat", 00:14:46.284 "superblock": false, 00:14:46.284 "num_base_bdevs": 3, 00:14:46.284 "num_base_bdevs_discovered": 1, 00:14:46.284 "num_base_bdevs_operational": 3, 00:14:46.284 "base_bdevs_list": [ 00:14:46.284 { 00:14:46.284 "name": "BaseBdev1", 00:14:46.284 "uuid": "76fb9801-9c4b-4ecf-abf1-16eaa9d5667c", 00:14:46.284 "is_configured": true, 00:14:46.284 "data_offset": 0, 00:14:46.284 "data_size": 65536 00:14:46.284 }, 00:14:46.284 { 00:14:46.284 "name": "BaseBdev2", 00:14:46.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.284 "is_configured": false, 00:14:46.284 "data_offset": 0, 00:14:46.284 "data_size": 0 00:14:46.284 }, 00:14:46.284 { 00:14:46.284 "name": "BaseBdev3", 00:14:46.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.284 "is_configured": false, 00:14:46.284 "data_offset": 0, 00:14:46.284 "data_size": 0 00:14:46.284 } 00:14:46.284 ] 00:14:46.284 }' 00:14:46.284 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.284 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.871 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:46.871 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.871 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.871 [2024-10-15 09:14:30.592204] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:46.871 [2024-10-15 09:14:30.592423] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:46.871 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.871 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:46.871 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.871 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.871 [2024-10-15 09:14:30.604259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:46.871 [2024-10-15 09:14:30.607065] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:46.871 [2024-10-15 09:14:30.607259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:46.871 [2024-10-15 09:14:30.607376] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:46.871 [2024-10-15 09:14:30.607537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:46.871 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.871 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:46.871 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:46.871 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:46.871 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.871 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.871 09:14:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:46.871 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.871 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.871 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.871 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.871 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.871 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.871 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.871 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.872 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.872 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.872 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.872 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.872 "name": "Existed_Raid", 00:14:46.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.872 "strip_size_kb": 64, 00:14:46.872 "state": "configuring", 00:14:46.872 "raid_level": "concat", 00:14:46.872 "superblock": false, 00:14:46.872 "num_base_bdevs": 3, 00:14:46.872 "num_base_bdevs_discovered": 1, 00:14:46.872 "num_base_bdevs_operational": 3, 00:14:46.872 "base_bdevs_list": [ 00:14:46.872 { 00:14:46.872 "name": "BaseBdev1", 00:14:46.872 "uuid": "76fb9801-9c4b-4ecf-abf1-16eaa9d5667c", 00:14:46.872 "is_configured": true, 00:14:46.872 "data_offset": 
0, 00:14:46.872 "data_size": 65536 00:14:46.872 }, 00:14:46.872 { 00:14:46.872 "name": "BaseBdev2", 00:14:46.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.872 "is_configured": false, 00:14:46.872 "data_offset": 0, 00:14:46.872 "data_size": 0 00:14:46.872 }, 00:14:46.872 { 00:14:46.872 "name": "BaseBdev3", 00:14:46.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.872 "is_configured": false, 00:14:46.872 "data_offset": 0, 00:14:46.872 "data_size": 0 00:14:46.872 } 00:14:46.872 ] 00:14:46.872 }' 00:14:46.872 09:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.872 09:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.438 [2024-10-15 09:14:31.158399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:47.438 BaseBdev2 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
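The `waitforbdev` helper being traced here defaults its timeout to 2000 ms when none is supplied (`[[ -z '' ]]` followed by `bdev_timeout=2000`), waits on `bdev_wait_for_examine`, then queries `bdev_get_bdevs -b <name> -t 2000`. A minimal self-contained sketch of that default-then-poll pattern — the `query` callable below is a stand-in for the RPC call, not an SPDK API:

```python
import time

def waitforbdev(bdev_name, query, bdev_timeout_ms=None):
    """Approximation of the shell helper: default the timeout to 2000 ms,
    then poll until the bdev shows up or the deadline passes."""
    if not bdev_timeout_ms:          # mirrors: [[ -z '' ]] && bdev_timeout=2000
        bdev_timeout_ms = 2000
    deadline = time.monotonic() + bdev_timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if query(bdev_name):         # stands in for `rpc.py bdev_get_bdevs -b ...`
            return True
        time.sleep(0.01)
    return False

# Stub query: pretend BaseBdev2 is already registered.
print(waitforbdev("BaseBdev2", lambda name: name == "BaseBdev2"))
```

In the real suite the polling and timeout live inside the `bdev_get_bdevs -t 2000` RPC itself; the sketch only illustrates the control flow the trace walks through.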
00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.438 [ 00:14:47.438 { 00:14:47.438 "name": "BaseBdev2", 00:14:47.438 "aliases": [ 00:14:47.438 "413bb527-ceec-4a93-8894-438b9b5566e2" 00:14:47.438 ], 00:14:47.438 "product_name": "Malloc disk", 00:14:47.438 "block_size": 512, 00:14:47.438 "num_blocks": 65536, 00:14:47.438 "uuid": "413bb527-ceec-4a93-8894-438b9b5566e2", 00:14:47.438 "assigned_rate_limits": { 00:14:47.438 "rw_ios_per_sec": 0, 00:14:47.438 "rw_mbytes_per_sec": 0, 00:14:47.438 "r_mbytes_per_sec": 0, 00:14:47.438 "w_mbytes_per_sec": 0 00:14:47.438 }, 00:14:47.438 "claimed": true, 00:14:47.438 "claim_type": "exclusive_write", 00:14:47.438 "zoned": false, 00:14:47.438 "supported_io_types": { 00:14:47.438 "read": true, 00:14:47.438 "write": true, 00:14:47.438 "unmap": true, 00:14:47.438 "flush": true, 00:14:47.438 "reset": true, 00:14:47.438 "nvme_admin": false, 00:14:47.438 "nvme_io": false, 00:14:47.438 "nvme_io_md": false, 00:14:47.438 "write_zeroes": true, 00:14:47.438 "zcopy": true, 00:14:47.438 "get_zone_info": false, 00:14:47.438 "zone_management": false, 00:14:47.438 "zone_append": false, 00:14:47.438 "compare": false, 00:14:47.438 "compare_and_write": false, 00:14:47.438 "abort": true, 00:14:47.438 "seek_hole": 
false, 00:14:47.438 "seek_data": false, 00:14:47.438 "copy": true, 00:14:47.438 "nvme_iov_md": false 00:14:47.438 }, 00:14:47.438 "memory_domains": [ 00:14:47.438 { 00:14:47.438 "dma_device_id": "system", 00:14:47.438 "dma_device_type": 1 00:14:47.438 }, 00:14:47.438 { 00:14:47.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.438 "dma_device_type": 2 00:14:47.438 } 00:14:47.438 ], 00:14:47.438 "driver_specific": {} 00:14:47.438 } 00:14:47.438 ] 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.438 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.439 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:47.439 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.439 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.439 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.439 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.439 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.439 09:14:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.439 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.439 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.439 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.439 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.439 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.439 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.439 "name": "Existed_Raid", 00:14:47.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.439 "strip_size_kb": 64, 00:14:47.439 "state": "configuring", 00:14:47.439 "raid_level": "concat", 00:14:47.439 "superblock": false, 00:14:47.439 "num_base_bdevs": 3, 00:14:47.439 "num_base_bdevs_discovered": 2, 00:14:47.439 "num_base_bdevs_operational": 3, 00:14:47.439 "base_bdevs_list": [ 00:14:47.439 { 00:14:47.439 "name": "BaseBdev1", 00:14:47.439 "uuid": "76fb9801-9c4b-4ecf-abf1-16eaa9d5667c", 00:14:47.439 "is_configured": true, 00:14:47.439 "data_offset": 0, 00:14:47.439 "data_size": 65536 00:14:47.439 }, 00:14:47.439 { 00:14:47.439 "name": "BaseBdev2", 00:14:47.439 "uuid": "413bb527-ceec-4a93-8894-438b9b5566e2", 00:14:47.439 "is_configured": true, 00:14:47.439 "data_offset": 0, 00:14:47.439 "data_size": 65536 00:14:47.439 }, 00:14:47.439 { 00:14:47.439 "name": "BaseBdev3", 00:14:47.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.439 "is_configured": false, 00:14:47.439 "data_offset": 0, 00:14:47.439 "data_size": 0 00:14:47.439 } 00:14:47.439 ] 00:14:47.439 }' 00:14:47.439 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.439 09:14:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.007 [2024-10-15 09:14:31.754835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:48.007 [2024-10-15 09:14:31.754913] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:48.007 [2024-10-15 09:14:31.754935] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:48.007 [2024-10-15 09:14:31.755343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:48.007 BaseBdev3 00:14:48.007 [2024-10-15 09:14:31.755580] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:48.007 [2024-10-15 09:14:31.755605] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:48.007 [2024-10-15 09:14:31.755967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:48.007 09:14:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.007 [ 00:14:48.007 { 00:14:48.007 "name": "BaseBdev3", 00:14:48.007 "aliases": [ 00:14:48.007 "5c612b0d-3da1-41ac-b064-acd2492d27ca" 00:14:48.007 ], 00:14:48.007 "product_name": "Malloc disk", 00:14:48.007 "block_size": 512, 00:14:48.007 "num_blocks": 65536, 00:14:48.007 "uuid": "5c612b0d-3da1-41ac-b064-acd2492d27ca", 00:14:48.007 "assigned_rate_limits": { 00:14:48.007 "rw_ios_per_sec": 0, 00:14:48.007 "rw_mbytes_per_sec": 0, 00:14:48.007 "r_mbytes_per_sec": 0, 00:14:48.007 "w_mbytes_per_sec": 0 00:14:48.007 }, 00:14:48.007 "claimed": true, 00:14:48.007 "claim_type": "exclusive_write", 00:14:48.007 "zoned": false, 00:14:48.007 "supported_io_types": { 00:14:48.007 "read": true, 00:14:48.007 "write": true, 00:14:48.007 "unmap": true, 00:14:48.007 "flush": true, 00:14:48.007 "reset": true, 00:14:48.007 "nvme_admin": false, 00:14:48.007 "nvme_io": false, 00:14:48.007 "nvme_io_md": false, 00:14:48.007 "write_zeroes": true, 00:14:48.007 "zcopy": true, 00:14:48.007 "get_zone_info": false, 00:14:48.007 "zone_management": false, 00:14:48.007 "zone_append": false, 00:14:48.007 "compare": false, 
00:14:48.007 "compare_and_write": false, 00:14:48.007 "abort": true, 00:14:48.007 "seek_hole": false, 00:14:48.007 "seek_data": false, 00:14:48.007 "copy": true, 00:14:48.007 "nvme_iov_md": false 00:14:48.007 }, 00:14:48.007 "memory_domains": [ 00:14:48.007 { 00:14:48.007 "dma_device_id": "system", 00:14:48.007 "dma_device_type": 1 00:14:48.007 }, 00:14:48.007 { 00:14:48.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.007 "dma_device_type": 2 00:14:48.007 } 00:14:48.007 ], 00:14:48.007 "driver_specific": {} 00:14:48.007 } 00:14:48.007 ] 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.007 "name": "Existed_Raid", 00:14:48.007 "uuid": "f18ae6ec-b405-47ae-af25-41f1f8000103", 00:14:48.007 "strip_size_kb": 64, 00:14:48.007 "state": "online", 00:14:48.007 "raid_level": "concat", 00:14:48.007 "superblock": false, 00:14:48.007 "num_base_bdevs": 3, 00:14:48.007 "num_base_bdevs_discovered": 3, 00:14:48.007 "num_base_bdevs_operational": 3, 00:14:48.007 "base_bdevs_list": [ 00:14:48.007 { 00:14:48.007 "name": "BaseBdev1", 00:14:48.007 "uuid": "76fb9801-9c4b-4ecf-abf1-16eaa9d5667c", 00:14:48.007 "is_configured": true, 00:14:48.007 "data_offset": 0, 00:14:48.007 "data_size": 65536 00:14:48.007 }, 00:14:48.007 { 00:14:48.007 "name": "BaseBdev2", 00:14:48.007 "uuid": "413bb527-ceec-4a93-8894-438b9b5566e2", 00:14:48.007 "is_configured": true, 00:14:48.007 "data_offset": 0, 00:14:48.007 "data_size": 65536 00:14:48.007 }, 00:14:48.007 { 00:14:48.007 "name": "BaseBdev3", 00:14:48.007 "uuid": "5c612b0d-3da1-41ac-b064-acd2492d27ca", 00:14:48.007 "is_configured": true, 00:14:48.007 "data_offset": 0, 00:14:48.007 "data_size": 65536 00:14:48.007 } 00:14:48.007 ] 00:14:48.007 }' 00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
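At each step, `verify_raid_bdev_state` extracts the raid entry from `bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "Existed_Raid")'` and compares its fields against the expected values. A Python sketch of that same check, run against the state JSON captured in the trace once all three base bdevs are claimed (the helper name and the trimmed field set are illustrative, not SPDK code):

```python
import json

# Raid state as reported above after BaseBdev3 is claimed, trimmed to the
# fields the shell helper actually compares.
raid_bdevs = json.loads("""[{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "concat",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}]""")

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size, operational):
    """Python equivalent of the jq select + per-field comparison."""
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    return info

info = verify_raid_bdev_state(raid_bdevs, "Existed_Raid",
                              "online", "concat", 64, 3)
print(info["num_base_bdevs_discovered"])
```

The transitions visible in this trace follow the same shape: `configuring` with 1, then 2, bdevs discovered; `online` with 3; and `offline` with 2 operational after `bdev_malloc_delete BaseBdev1`, since concat carries no redundancy.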
00:14:48.007 09:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.574 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:48.574 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:48.574 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:48.574 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:48.574 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:48.574 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:48.574 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:48.574 09:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.574 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:48.574 09:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.574 [2024-10-15 09:14:32.335572] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:48.574 09:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.574 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:48.574 "name": "Existed_Raid", 00:14:48.574 "aliases": [ 00:14:48.574 "f18ae6ec-b405-47ae-af25-41f1f8000103" 00:14:48.574 ], 00:14:48.574 "product_name": "Raid Volume", 00:14:48.574 "block_size": 512, 00:14:48.574 "num_blocks": 196608, 00:14:48.574 "uuid": "f18ae6ec-b405-47ae-af25-41f1f8000103", 00:14:48.574 "assigned_rate_limits": { 00:14:48.574 "rw_ios_per_sec": 0, 00:14:48.574 "rw_mbytes_per_sec": 0, 00:14:48.574 "r_mbytes_per_sec": 
0, 00:14:48.574 "w_mbytes_per_sec": 0 00:14:48.574 }, 00:14:48.574 "claimed": false, 00:14:48.574 "zoned": false, 00:14:48.574 "supported_io_types": { 00:14:48.574 "read": true, 00:14:48.574 "write": true, 00:14:48.574 "unmap": true, 00:14:48.574 "flush": true, 00:14:48.574 "reset": true, 00:14:48.574 "nvme_admin": false, 00:14:48.574 "nvme_io": false, 00:14:48.574 "nvme_io_md": false, 00:14:48.574 "write_zeroes": true, 00:14:48.574 "zcopy": false, 00:14:48.574 "get_zone_info": false, 00:14:48.574 "zone_management": false, 00:14:48.574 "zone_append": false, 00:14:48.574 "compare": false, 00:14:48.574 "compare_and_write": false, 00:14:48.574 "abort": false, 00:14:48.574 "seek_hole": false, 00:14:48.574 "seek_data": false, 00:14:48.574 "copy": false, 00:14:48.574 "nvme_iov_md": false 00:14:48.574 }, 00:14:48.574 "memory_domains": [ 00:14:48.574 { 00:14:48.574 "dma_device_id": "system", 00:14:48.574 "dma_device_type": 1 00:14:48.574 }, 00:14:48.574 { 00:14:48.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.574 "dma_device_type": 2 00:14:48.574 }, 00:14:48.574 { 00:14:48.574 "dma_device_id": "system", 00:14:48.574 "dma_device_type": 1 00:14:48.574 }, 00:14:48.574 { 00:14:48.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.574 "dma_device_type": 2 00:14:48.574 }, 00:14:48.574 { 00:14:48.574 "dma_device_id": "system", 00:14:48.574 "dma_device_type": 1 00:14:48.574 }, 00:14:48.574 { 00:14:48.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.574 "dma_device_type": 2 00:14:48.574 } 00:14:48.574 ], 00:14:48.574 "driver_specific": { 00:14:48.574 "raid": { 00:14:48.574 "uuid": "f18ae6ec-b405-47ae-af25-41f1f8000103", 00:14:48.574 "strip_size_kb": 64, 00:14:48.574 "state": "online", 00:14:48.574 "raid_level": "concat", 00:14:48.574 "superblock": false, 00:14:48.574 "num_base_bdevs": 3, 00:14:48.574 "num_base_bdevs_discovered": 3, 00:14:48.574 "num_base_bdevs_operational": 3, 00:14:48.574 "base_bdevs_list": [ 00:14:48.574 { 00:14:48.574 "name": "BaseBdev1", 
00:14:48.574 "uuid": "76fb9801-9c4b-4ecf-abf1-16eaa9d5667c", 00:14:48.574 "is_configured": true, 00:14:48.574 "data_offset": 0, 00:14:48.575 "data_size": 65536 00:14:48.575 }, 00:14:48.575 { 00:14:48.575 "name": "BaseBdev2", 00:14:48.575 "uuid": "413bb527-ceec-4a93-8894-438b9b5566e2", 00:14:48.575 "is_configured": true, 00:14:48.575 "data_offset": 0, 00:14:48.575 "data_size": 65536 00:14:48.575 }, 00:14:48.575 { 00:14:48.575 "name": "BaseBdev3", 00:14:48.575 "uuid": "5c612b0d-3da1-41ac-b064-acd2492d27ca", 00:14:48.575 "is_configured": true, 00:14:48.575 "data_offset": 0, 00:14:48.575 "data_size": 65536 00:14:48.575 } 00:14:48.575 ] 00:14:48.575 } 00:14:48.575 } 00:14:48.575 }' 00:14:48.575 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:48.575 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:48.575 BaseBdev2 00:14:48.575 BaseBdev3' 00:14:48.575 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.575 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:48.575 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.575 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.575 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:48.575 09:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.575 09:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.833 [2024-10-15 09:14:32.651326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:48.833 [2024-10-15 09:14:32.651498] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.833 [2024-10-15 09:14:32.651721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.833 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.834 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.834 09:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.834 09:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.092 09:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.092 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.092 "name": "Existed_Raid", 00:14:49.092 "uuid": "f18ae6ec-b405-47ae-af25-41f1f8000103", 00:14:49.092 "strip_size_kb": 64, 00:14:49.092 "state": "offline", 00:14:49.092 "raid_level": "concat", 00:14:49.092 "superblock": false, 00:14:49.092 "num_base_bdevs": 3, 00:14:49.092 "num_base_bdevs_discovered": 2, 00:14:49.092 "num_base_bdevs_operational": 2, 00:14:49.092 "base_bdevs_list": [ 00:14:49.092 { 00:14:49.092 "name": null, 00:14:49.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.092 "is_configured": false, 00:14:49.092 "data_offset": 0, 00:14:49.092 "data_size": 65536 00:14:49.092 }, 00:14:49.092 { 00:14:49.092 "name": "BaseBdev2", 00:14:49.092 "uuid": 
"413bb527-ceec-4a93-8894-438b9b5566e2", 00:14:49.092 "is_configured": true, 00:14:49.092 "data_offset": 0, 00:14:49.092 "data_size": 65536 00:14:49.092 }, 00:14:49.092 { 00:14:49.092 "name": "BaseBdev3", 00:14:49.092 "uuid": "5c612b0d-3da1-41ac-b064-acd2492d27ca", 00:14:49.092 "is_configured": true, 00:14:49.092 "data_offset": 0, 00:14:49.092 "data_size": 65536 00:14:49.092 } 00:14:49.092 ] 00:14:49.092 }' 00:14:49.092 09:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.092 09:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.660 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:49.660 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:49.660 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.660 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:49.660 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.660 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.660 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.660 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:49.660 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:49.660 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:49.660 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.660 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.660 [2024-10-15 09:14:33.364046] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:49.660 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.660 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:49.661 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:49.661 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.661 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.661 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.661 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:49.661 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.661 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:49.661 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:49.661 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:49.661 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.661 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.661 [2024-10-15 09:14:33.526803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:49.661 [2024-10-15 09:14:33.527035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:49.919 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.919 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:49.919 09:14:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:49.919 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.919 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.919 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:49.919 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.920 BaseBdev2 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:49.920 
09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.920 [ 00:14:49.920 { 00:14:49.920 "name": "BaseBdev2", 00:14:49.920 "aliases": [ 00:14:49.920 "fc584502-9232-4203-b837-63f2e2a3952b" 00:14:49.920 ], 00:14:49.920 "product_name": "Malloc disk", 00:14:49.920 "block_size": 512, 00:14:49.920 "num_blocks": 65536, 00:14:49.920 "uuid": "fc584502-9232-4203-b837-63f2e2a3952b", 00:14:49.920 "assigned_rate_limits": { 00:14:49.920 "rw_ios_per_sec": 0, 00:14:49.920 "rw_mbytes_per_sec": 0, 00:14:49.920 "r_mbytes_per_sec": 0, 00:14:49.920 "w_mbytes_per_sec": 0 00:14:49.920 }, 00:14:49.920 "claimed": false, 00:14:49.920 "zoned": false, 00:14:49.920 "supported_io_types": { 00:14:49.920 "read": true, 00:14:49.920 "write": true, 00:14:49.920 "unmap": true, 00:14:49.920 "flush": true, 00:14:49.920 "reset": true, 00:14:49.920 "nvme_admin": false, 00:14:49.920 "nvme_io": false, 00:14:49.920 "nvme_io_md": false, 00:14:49.920 "write_zeroes": true, 
00:14:49.920 "zcopy": true, 00:14:49.920 "get_zone_info": false, 00:14:49.920 "zone_management": false, 00:14:49.920 "zone_append": false, 00:14:49.920 "compare": false, 00:14:49.920 "compare_and_write": false, 00:14:49.920 "abort": true, 00:14:49.920 "seek_hole": false, 00:14:49.920 "seek_data": false, 00:14:49.920 "copy": true, 00:14:49.920 "nvme_iov_md": false 00:14:49.920 }, 00:14:49.920 "memory_domains": [ 00:14:49.920 { 00:14:49.920 "dma_device_id": "system", 00:14:49.920 "dma_device_type": 1 00:14:49.920 }, 00:14:49.920 { 00:14:49.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.920 "dma_device_type": 2 00:14:49.920 } 00:14:49.920 ], 00:14:49.920 "driver_specific": {} 00:14:49.920 } 00:14:49.920 ] 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.920 BaseBdev3 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:49.920 09:14:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.920 [ 00:14:49.920 { 00:14:49.920 "name": "BaseBdev3", 00:14:49.920 "aliases": [ 00:14:49.920 "1ca68248-b5b3-4dc3-9dfd-2e75c2eca020" 00:14:49.920 ], 00:14:49.920 "product_name": "Malloc disk", 00:14:49.920 "block_size": 512, 00:14:49.920 "num_blocks": 65536, 00:14:49.920 "uuid": "1ca68248-b5b3-4dc3-9dfd-2e75c2eca020", 00:14:49.920 "assigned_rate_limits": { 00:14:49.920 "rw_ios_per_sec": 0, 00:14:49.920 "rw_mbytes_per_sec": 0, 00:14:49.920 "r_mbytes_per_sec": 0, 00:14:49.920 "w_mbytes_per_sec": 0 00:14:49.920 }, 00:14:49.920 "claimed": false, 00:14:49.920 "zoned": false, 00:14:49.920 "supported_io_types": { 00:14:49.920 "read": true, 00:14:49.920 "write": true, 00:14:49.920 "unmap": true, 00:14:49.920 "flush": true, 00:14:49.920 "reset": true, 00:14:49.920 "nvme_admin": false, 00:14:49.920 "nvme_io": false, 00:14:49.920 "nvme_io_md": false, 00:14:49.920 "write_zeroes": true, 
00:14:49.920 "zcopy": true, 00:14:49.920 "get_zone_info": false, 00:14:49.920 "zone_management": false, 00:14:49.920 "zone_append": false, 00:14:49.920 "compare": false, 00:14:49.920 "compare_and_write": false, 00:14:49.920 "abort": true, 00:14:49.920 "seek_hole": false, 00:14:49.920 "seek_data": false, 00:14:49.920 "copy": true, 00:14:49.920 "nvme_iov_md": false 00:14:49.920 }, 00:14:49.920 "memory_domains": [ 00:14:49.920 { 00:14:49.920 "dma_device_id": "system", 00:14:49.920 "dma_device_type": 1 00:14:49.920 }, 00:14:49.920 { 00:14:49.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.920 "dma_device_type": 2 00:14:49.920 } 00:14:49.920 ], 00:14:49.920 "driver_specific": {} 00:14:49.920 } 00:14:49.920 ] 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.920 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.920 [2024-10-15 09:14:33.845485] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:49.920 [2024-10-15 09:14:33.845677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:49.920 [2024-10-15 09:14:33.845826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:50.179 [2024-10-15 09:14:33.848545] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:50.179 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.179 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:50.179 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.179 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.179 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:50.179 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.179 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.179 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.179 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.179 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.179 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.179 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.179 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.179 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.179 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.179 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.179 09:14:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.179 "name": "Existed_Raid", 00:14:50.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.179 "strip_size_kb": 64, 00:14:50.179 "state": "configuring", 00:14:50.179 "raid_level": "concat", 00:14:50.179 "superblock": false, 00:14:50.179 "num_base_bdevs": 3, 00:14:50.179 "num_base_bdevs_discovered": 2, 00:14:50.179 "num_base_bdevs_operational": 3, 00:14:50.179 "base_bdevs_list": [ 00:14:50.179 { 00:14:50.179 "name": "BaseBdev1", 00:14:50.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.179 "is_configured": false, 00:14:50.179 "data_offset": 0, 00:14:50.179 "data_size": 0 00:14:50.179 }, 00:14:50.179 { 00:14:50.179 "name": "BaseBdev2", 00:14:50.179 "uuid": "fc584502-9232-4203-b837-63f2e2a3952b", 00:14:50.179 "is_configured": true, 00:14:50.179 "data_offset": 0, 00:14:50.179 "data_size": 65536 00:14:50.179 }, 00:14:50.179 { 00:14:50.179 "name": "BaseBdev3", 00:14:50.179 "uuid": "1ca68248-b5b3-4dc3-9dfd-2e75c2eca020", 00:14:50.179 "is_configured": true, 00:14:50.179 "data_offset": 0, 00:14:50.179 "data_size": 65536 00:14:50.179 } 00:14:50.179 ] 00:14:50.179 }' 00:14:50.179 09:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.179 09:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.469 09:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:50.469 09:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.469 09:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.469 [2024-10-15 09:14:34.381532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:50.469 09:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.469 09:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:50.469 09:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.469 09:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.469 09:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:50.469 09:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.469 09:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.469 09:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.469 09:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.469 09:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.469 09:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.469 09:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.469 09:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.469 09:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.469 09:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.727 09:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.727 09:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.727 "name": "Existed_Raid", 00:14:50.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.727 "strip_size_kb": 64, 00:14:50.727 "state": "configuring", 00:14:50.727 "raid_level": "concat", 00:14:50.727 "superblock": false, 
00:14:50.727 "num_base_bdevs": 3, 00:14:50.727 "num_base_bdevs_discovered": 1, 00:14:50.727 "num_base_bdevs_operational": 3, 00:14:50.727 "base_bdevs_list": [ 00:14:50.727 { 00:14:50.727 "name": "BaseBdev1", 00:14:50.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.727 "is_configured": false, 00:14:50.727 "data_offset": 0, 00:14:50.727 "data_size": 0 00:14:50.727 }, 00:14:50.727 { 00:14:50.727 "name": null, 00:14:50.727 "uuid": "fc584502-9232-4203-b837-63f2e2a3952b", 00:14:50.727 "is_configured": false, 00:14:50.727 "data_offset": 0, 00:14:50.727 "data_size": 65536 00:14:50.727 }, 00:14:50.727 { 00:14:50.727 "name": "BaseBdev3", 00:14:50.727 "uuid": "1ca68248-b5b3-4dc3-9dfd-2e75c2eca020", 00:14:50.727 "is_configured": true, 00:14:50.727 "data_offset": 0, 00:14:50.727 "data_size": 65536 00:14:50.727 } 00:14:50.727 ] 00:14:50.727 }' 00:14:50.727 09:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.727 09:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.297 09:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.297 09:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.297 09:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.297 09:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:51.297 09:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.297 09:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:51.297 09:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:51.297 09:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.297 
09:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.297 [2024-10-15 09:14:35.032259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.297 BaseBdev1 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.297 [ 00:14:51.297 { 00:14:51.297 "name": "BaseBdev1", 00:14:51.297 "aliases": [ 00:14:51.297 "2a8fe6d8-2e70-419b-a2da-8acc633297d7" 00:14:51.297 ], 00:14:51.297 "product_name": 
"Malloc disk", 00:14:51.297 "block_size": 512, 00:14:51.297 "num_blocks": 65536, 00:14:51.297 "uuid": "2a8fe6d8-2e70-419b-a2da-8acc633297d7", 00:14:51.297 "assigned_rate_limits": { 00:14:51.297 "rw_ios_per_sec": 0, 00:14:51.297 "rw_mbytes_per_sec": 0, 00:14:51.297 "r_mbytes_per_sec": 0, 00:14:51.297 "w_mbytes_per_sec": 0 00:14:51.297 }, 00:14:51.297 "claimed": true, 00:14:51.297 "claim_type": "exclusive_write", 00:14:51.297 "zoned": false, 00:14:51.297 "supported_io_types": { 00:14:51.297 "read": true, 00:14:51.297 "write": true, 00:14:51.297 "unmap": true, 00:14:51.297 "flush": true, 00:14:51.297 "reset": true, 00:14:51.297 "nvme_admin": false, 00:14:51.297 "nvme_io": false, 00:14:51.297 "nvme_io_md": false, 00:14:51.297 "write_zeroes": true, 00:14:51.297 "zcopy": true, 00:14:51.297 "get_zone_info": false, 00:14:51.297 "zone_management": false, 00:14:51.297 "zone_append": false, 00:14:51.297 "compare": false, 00:14:51.297 "compare_and_write": false, 00:14:51.297 "abort": true, 00:14:51.297 "seek_hole": false, 00:14:51.297 "seek_data": false, 00:14:51.297 "copy": true, 00:14:51.297 "nvme_iov_md": false 00:14:51.297 }, 00:14:51.297 "memory_domains": [ 00:14:51.297 { 00:14:51.297 "dma_device_id": "system", 00:14:51.297 "dma_device_type": 1 00:14:51.297 }, 00:14:51.297 { 00:14:51.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.297 "dma_device_type": 2 00:14:51.297 } 00:14:51.297 ], 00:14:51.297 "driver_specific": {} 00:14:51.297 } 00:14:51.297 ] 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.297 09:14:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.297 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.297 "name": "Existed_Raid", 00:14:51.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.297 "strip_size_kb": 64, 00:14:51.297 "state": "configuring", 00:14:51.297 "raid_level": "concat", 00:14:51.297 "superblock": false, 00:14:51.297 "num_base_bdevs": 3, 00:14:51.297 "num_base_bdevs_discovered": 2, 00:14:51.297 "num_base_bdevs_operational": 3, 00:14:51.297 "base_bdevs_list": [ 00:14:51.297 { 00:14:51.297 "name": "BaseBdev1", 
00:14:51.297 "uuid": "2a8fe6d8-2e70-419b-a2da-8acc633297d7", 00:14:51.297 "is_configured": true, 00:14:51.297 "data_offset": 0, 00:14:51.297 "data_size": 65536 00:14:51.297 }, 00:14:51.297 { 00:14:51.297 "name": null, 00:14:51.297 "uuid": "fc584502-9232-4203-b837-63f2e2a3952b", 00:14:51.297 "is_configured": false, 00:14:51.298 "data_offset": 0, 00:14:51.298 "data_size": 65536 00:14:51.298 }, 00:14:51.298 { 00:14:51.298 "name": "BaseBdev3", 00:14:51.298 "uuid": "1ca68248-b5b3-4dc3-9dfd-2e75c2eca020", 00:14:51.298 "is_configured": true, 00:14:51.298 "data_offset": 0, 00:14:51.298 "data_size": 65536 00:14:51.298 } 00:14:51.298 ] 00:14:51.298 }' 00:14:51.298 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.298 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.864 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.864 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:51.864 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.864 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.864 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.864 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:51.864 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:51.864 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.864 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.864 [2024-10-15 09:14:35.648528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:51.864 
09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.864 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:51.864 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.864 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.864 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:51.864 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.864 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.864 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.864 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.864 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.864 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.865 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.865 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.865 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.865 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.865 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.865 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.865 "name": "Existed_Raid", 00:14:51.865 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:51.865 "strip_size_kb": 64, 00:14:51.865 "state": "configuring", 00:14:51.865 "raid_level": "concat", 00:14:51.865 "superblock": false, 00:14:51.865 "num_base_bdevs": 3, 00:14:51.865 "num_base_bdevs_discovered": 1, 00:14:51.865 "num_base_bdevs_operational": 3, 00:14:51.865 "base_bdevs_list": [ 00:14:51.865 { 00:14:51.865 "name": "BaseBdev1", 00:14:51.865 "uuid": "2a8fe6d8-2e70-419b-a2da-8acc633297d7", 00:14:51.865 "is_configured": true, 00:14:51.865 "data_offset": 0, 00:14:51.865 "data_size": 65536 00:14:51.865 }, 00:14:51.865 { 00:14:51.865 "name": null, 00:14:51.865 "uuid": "fc584502-9232-4203-b837-63f2e2a3952b", 00:14:51.865 "is_configured": false, 00:14:51.865 "data_offset": 0, 00:14:51.865 "data_size": 65536 00:14:51.865 }, 00:14:51.865 { 00:14:51.865 "name": null, 00:14:51.865 "uuid": "1ca68248-b5b3-4dc3-9dfd-2e75c2eca020", 00:14:51.865 "is_configured": false, 00:14:51.865 "data_offset": 0, 00:14:51.865 "data_size": 65536 00:14:51.865 } 00:14:51.865 ] 00:14:51.865 }' 00:14:51.865 09:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.865 09:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.431 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:52.431 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.431 09:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.431 09:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.431 09:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.431 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:52.431 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:52.431 09:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.432 09:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.432 [2024-10-15 09:14:36.224743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:52.432 09:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.432 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:52.432 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.432 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.432 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:52.432 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.432 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.432 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.432 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.432 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.432 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.432 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.432 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.432 09:14:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.432 09:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.432 09:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.432 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.432 "name": "Existed_Raid", 00:14:52.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.432 "strip_size_kb": 64, 00:14:52.432 "state": "configuring", 00:14:52.432 "raid_level": "concat", 00:14:52.432 "superblock": false, 00:14:52.432 "num_base_bdevs": 3, 00:14:52.432 "num_base_bdevs_discovered": 2, 00:14:52.432 "num_base_bdevs_operational": 3, 00:14:52.432 "base_bdevs_list": [ 00:14:52.432 { 00:14:52.432 "name": "BaseBdev1", 00:14:52.432 "uuid": "2a8fe6d8-2e70-419b-a2da-8acc633297d7", 00:14:52.432 "is_configured": true, 00:14:52.432 "data_offset": 0, 00:14:52.432 "data_size": 65536 00:14:52.432 }, 00:14:52.432 { 00:14:52.432 "name": null, 00:14:52.432 "uuid": "fc584502-9232-4203-b837-63f2e2a3952b", 00:14:52.432 "is_configured": false, 00:14:52.432 "data_offset": 0, 00:14:52.432 "data_size": 65536 00:14:52.432 }, 00:14:52.432 { 00:14:52.432 "name": "BaseBdev3", 00:14:52.432 "uuid": "1ca68248-b5b3-4dc3-9dfd-2e75c2eca020", 00:14:52.432 "is_configured": true, 00:14:52.432 "data_offset": 0, 00:14:52.432 "data_size": 65536 00:14:52.432 } 00:14:52.432 ] 00:14:52.432 }' 00:14:52.432 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.432 09:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.000 [2024-10-15 09:14:36.800923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.000 09:14:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.000 09:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.258 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.258 "name": "Existed_Raid", 00:14:53.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.258 "strip_size_kb": 64, 00:14:53.258 "state": "configuring", 00:14:53.258 "raid_level": "concat", 00:14:53.258 "superblock": false, 00:14:53.258 "num_base_bdevs": 3, 00:14:53.258 "num_base_bdevs_discovered": 1, 00:14:53.258 "num_base_bdevs_operational": 3, 00:14:53.258 "base_bdevs_list": [ 00:14:53.258 { 00:14:53.258 "name": null, 00:14:53.258 "uuid": "2a8fe6d8-2e70-419b-a2da-8acc633297d7", 00:14:53.258 "is_configured": false, 00:14:53.258 "data_offset": 0, 00:14:53.258 "data_size": 65536 00:14:53.258 }, 00:14:53.258 { 00:14:53.258 "name": null, 00:14:53.258 "uuid": "fc584502-9232-4203-b837-63f2e2a3952b", 00:14:53.258 "is_configured": false, 00:14:53.258 "data_offset": 0, 00:14:53.258 "data_size": 65536 00:14:53.258 }, 00:14:53.258 { 00:14:53.258 "name": "BaseBdev3", 00:14:53.258 "uuid": "1ca68248-b5b3-4dc3-9dfd-2e75c2eca020", 00:14:53.258 "is_configured": true, 00:14:53.258 "data_offset": 0, 00:14:53.258 "data_size": 65536 00:14:53.258 } 00:14:53.258 ] 00:14:53.258 }' 00:14:53.258 09:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.258 09:14:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.517 09:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.517 09:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:53.517 09:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.517 09:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.517 09:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.776 [2024-10-15 09:14:37.463003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.776 09:14:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.776 "name": "Existed_Raid", 00:14:53.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.776 "strip_size_kb": 64, 00:14:53.776 "state": "configuring", 00:14:53.776 "raid_level": "concat", 00:14:53.776 "superblock": false, 00:14:53.776 "num_base_bdevs": 3, 00:14:53.776 "num_base_bdevs_discovered": 2, 00:14:53.776 "num_base_bdevs_operational": 3, 00:14:53.776 "base_bdevs_list": [ 00:14:53.776 { 00:14:53.776 "name": null, 00:14:53.776 "uuid": "2a8fe6d8-2e70-419b-a2da-8acc633297d7", 00:14:53.776 "is_configured": false, 00:14:53.776 "data_offset": 0, 00:14:53.776 "data_size": 65536 00:14:53.776 }, 00:14:53.776 { 00:14:53.776 "name": "BaseBdev2", 00:14:53.776 "uuid": "fc584502-9232-4203-b837-63f2e2a3952b", 00:14:53.776 "is_configured": true, 00:14:53.776 "data_offset": 
0, 00:14:53.776 "data_size": 65536 00:14:53.776 }, 00:14:53.776 { 00:14:53.776 "name": "BaseBdev3", 00:14:53.776 "uuid": "1ca68248-b5b3-4dc3-9dfd-2e75c2eca020", 00:14:53.776 "is_configured": true, 00:14:53.776 "data_offset": 0, 00:14:53.776 "data_size": 65536 00:14:53.776 } 00:14:53.776 ] 00:14:53.776 }' 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.776 09:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.343 09:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.343 09:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:54.343 09:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.343 09:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.343 09:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2a8fe6d8-2e70-419b-a2da-8acc633297d7 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.343 [2024-10-15 09:14:38.122451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:54.343 [2024-10-15 09:14:38.122544] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:54.343 [2024-10-15 09:14:38.122592] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:54.343 [2024-10-15 09:14:38.122945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:54.343 [2024-10-15 09:14:38.123143] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:54.343 [2024-10-15 09:14:38.123160] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:54.343 [2024-10-15 09:14:38.123619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.343 NewBaseBdev 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:54.343 
09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.343 [ 00:14:54.343 { 00:14:54.343 "name": "NewBaseBdev", 00:14:54.343 "aliases": [ 00:14:54.343 "2a8fe6d8-2e70-419b-a2da-8acc633297d7" 00:14:54.343 ], 00:14:54.343 "product_name": "Malloc disk", 00:14:54.343 "block_size": 512, 00:14:54.343 "num_blocks": 65536, 00:14:54.343 "uuid": "2a8fe6d8-2e70-419b-a2da-8acc633297d7", 00:14:54.343 "assigned_rate_limits": { 00:14:54.343 "rw_ios_per_sec": 0, 00:14:54.343 "rw_mbytes_per_sec": 0, 00:14:54.343 "r_mbytes_per_sec": 0, 00:14:54.343 "w_mbytes_per_sec": 0 00:14:54.343 }, 00:14:54.343 "claimed": true, 00:14:54.343 "claim_type": "exclusive_write", 00:14:54.343 "zoned": false, 00:14:54.343 "supported_io_types": { 00:14:54.343 "read": true, 00:14:54.343 "write": true, 00:14:54.343 "unmap": true, 00:14:54.343 "flush": true, 00:14:54.343 "reset": true, 00:14:54.343 "nvme_admin": false, 00:14:54.343 "nvme_io": false, 00:14:54.343 "nvme_io_md": false, 00:14:54.343 "write_zeroes": true, 00:14:54.343 "zcopy": true, 00:14:54.343 "get_zone_info": false, 00:14:54.343 "zone_management": false, 00:14:54.343 "zone_append": false, 00:14:54.343 "compare": false, 00:14:54.343 "compare_and_write": false, 00:14:54.343 "abort": true, 00:14:54.343 "seek_hole": false, 00:14:54.343 "seek_data": false, 00:14:54.343 "copy": true, 00:14:54.343 "nvme_iov_md": false 00:14:54.343 }, 00:14:54.343 
"memory_domains": [ 00:14:54.343 { 00:14:54.343 "dma_device_id": "system", 00:14:54.343 "dma_device_type": 1 00:14:54.343 }, 00:14:54.343 { 00:14:54.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.343 "dma_device_type": 2 00:14:54.343 } 00:14:54.343 ], 00:14:54.343 "driver_specific": {} 00:14:54.343 } 00:14:54.343 ] 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.343 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.344 "name": "Existed_Raid", 00:14:54.344 "uuid": "cca55a81-9e25-4845-b690-f3ac1323d44f", 00:14:54.344 "strip_size_kb": 64, 00:14:54.344 "state": "online", 00:14:54.344 "raid_level": "concat", 00:14:54.344 "superblock": false, 00:14:54.344 "num_base_bdevs": 3, 00:14:54.344 "num_base_bdevs_discovered": 3, 00:14:54.344 "num_base_bdevs_operational": 3, 00:14:54.344 "base_bdevs_list": [ 00:14:54.344 { 00:14:54.344 "name": "NewBaseBdev", 00:14:54.344 "uuid": "2a8fe6d8-2e70-419b-a2da-8acc633297d7", 00:14:54.344 "is_configured": true, 00:14:54.344 "data_offset": 0, 00:14:54.344 "data_size": 65536 00:14:54.344 }, 00:14:54.344 { 00:14:54.344 "name": "BaseBdev2", 00:14:54.344 "uuid": "fc584502-9232-4203-b837-63f2e2a3952b", 00:14:54.344 "is_configured": true, 00:14:54.344 "data_offset": 0, 00:14:54.344 "data_size": 65536 00:14:54.344 }, 00:14:54.344 { 00:14:54.344 "name": "BaseBdev3", 00:14:54.344 "uuid": "1ca68248-b5b3-4dc3-9dfd-2e75c2eca020", 00:14:54.344 "is_configured": true, 00:14:54.344 "data_offset": 0, 00:14:54.344 "data_size": 65536 00:14:54.344 } 00:14:54.344 ] 00:14:54.344 }' 00:14:54.344 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.344 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.911 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:54.911 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:54.911 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:14:54.911 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:54.911 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:54.911 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:54.911 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:54.911 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:54.911 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.911 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.911 [2024-10-15 09:14:38.703175] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:54.911 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.911 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:54.911 "name": "Existed_Raid", 00:14:54.911 "aliases": [ 00:14:54.911 "cca55a81-9e25-4845-b690-f3ac1323d44f" 00:14:54.911 ], 00:14:54.911 "product_name": "Raid Volume", 00:14:54.911 "block_size": 512, 00:14:54.911 "num_blocks": 196608, 00:14:54.911 "uuid": "cca55a81-9e25-4845-b690-f3ac1323d44f", 00:14:54.911 "assigned_rate_limits": { 00:14:54.911 "rw_ios_per_sec": 0, 00:14:54.911 "rw_mbytes_per_sec": 0, 00:14:54.911 "r_mbytes_per_sec": 0, 00:14:54.911 "w_mbytes_per_sec": 0 00:14:54.911 }, 00:14:54.911 "claimed": false, 00:14:54.911 "zoned": false, 00:14:54.911 "supported_io_types": { 00:14:54.911 "read": true, 00:14:54.911 "write": true, 00:14:54.911 "unmap": true, 00:14:54.911 "flush": true, 00:14:54.911 "reset": true, 00:14:54.911 "nvme_admin": false, 00:14:54.911 "nvme_io": false, 00:14:54.911 "nvme_io_md": false, 00:14:54.911 "write_zeroes": true, 
00:14:54.911 "zcopy": false, 00:14:54.911 "get_zone_info": false, 00:14:54.911 "zone_management": false, 00:14:54.911 "zone_append": false, 00:14:54.911 "compare": false, 00:14:54.911 "compare_and_write": false, 00:14:54.911 "abort": false, 00:14:54.911 "seek_hole": false, 00:14:54.911 "seek_data": false, 00:14:54.911 "copy": false, 00:14:54.911 "nvme_iov_md": false 00:14:54.911 }, 00:14:54.911 "memory_domains": [ 00:14:54.911 { 00:14:54.911 "dma_device_id": "system", 00:14:54.911 "dma_device_type": 1 00:14:54.911 }, 00:14:54.911 { 00:14:54.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.911 "dma_device_type": 2 00:14:54.911 }, 00:14:54.911 { 00:14:54.911 "dma_device_id": "system", 00:14:54.911 "dma_device_type": 1 00:14:54.911 }, 00:14:54.911 { 00:14:54.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.911 "dma_device_type": 2 00:14:54.911 }, 00:14:54.911 { 00:14:54.911 "dma_device_id": "system", 00:14:54.911 "dma_device_type": 1 00:14:54.911 }, 00:14:54.911 { 00:14:54.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.911 "dma_device_type": 2 00:14:54.911 } 00:14:54.911 ], 00:14:54.911 "driver_specific": { 00:14:54.911 "raid": { 00:14:54.911 "uuid": "cca55a81-9e25-4845-b690-f3ac1323d44f", 00:14:54.911 "strip_size_kb": 64, 00:14:54.911 "state": "online", 00:14:54.911 "raid_level": "concat", 00:14:54.911 "superblock": false, 00:14:54.911 "num_base_bdevs": 3, 00:14:54.911 "num_base_bdevs_discovered": 3, 00:14:54.911 "num_base_bdevs_operational": 3, 00:14:54.911 "base_bdevs_list": [ 00:14:54.912 { 00:14:54.912 "name": "NewBaseBdev", 00:14:54.912 "uuid": "2a8fe6d8-2e70-419b-a2da-8acc633297d7", 00:14:54.912 "is_configured": true, 00:14:54.912 "data_offset": 0, 00:14:54.912 "data_size": 65536 00:14:54.912 }, 00:14:54.912 { 00:14:54.912 "name": "BaseBdev2", 00:14:54.912 "uuid": "fc584502-9232-4203-b837-63f2e2a3952b", 00:14:54.912 "is_configured": true, 00:14:54.912 "data_offset": 0, 00:14:54.912 "data_size": 65536 00:14:54.912 }, 00:14:54.912 { 
00:14:54.912 "name": "BaseBdev3", 00:14:54.912 "uuid": "1ca68248-b5b3-4dc3-9dfd-2e75c2eca020", 00:14:54.912 "is_configured": true, 00:14:54.912 "data_offset": 0, 00:14:54.912 "data_size": 65536 00:14:54.912 } 00:14:54.912 ] 00:14:54.912 } 00:14:54.912 } 00:14:54.912 }' 00:14:54.912 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:54.912 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:54.912 BaseBdev2 00:14:54.912 BaseBdev3' 00:14:54.912 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:55.171 09:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.171 09:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:55.171 09:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:55.171 09:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:14:55.171 09:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.171 09:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:55.171 [2024-10-15 09:14:39.026878] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:55.171 [2024-10-15 09:14:39.027049] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:55.171 [2024-10-15 09:14:39.027327] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:55.171 [2024-10-15 09:14:39.027425] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:55.171 [2024-10-15 09:14:39.027450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:14:55.171 09:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.171 09:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65827
00:14:55.171 09:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 65827 ']'
00:14:55.171 09:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 65827
00:14:55.171 09:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname
00:14:55.171 09:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:14:55.171 09:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65827
00:14:55.171 killing process with pid 65827
09:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:14:55.171 09:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:14:55.171 09:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65827'
00:14:55.171 09:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 65827
00:14:55.171 09:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 65827
00:14:55.171 [2024-10-15 09:14:39.063776] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:55.430 [2024-10-15 09:14:39.354183] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:56.806 09:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:14:56.806
00:14:56.806 real 0m12.207s
00:14:56.806 user 0m20.054s
00:14:56.806 sys 0m1.762s
00:14:56.806 09:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:56.806 09:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:56.806 ************************************
00:14:56.806 END TEST raid_state_function_test
00:14:56.806 ************************************
00:14:56.806 09:14:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true
00:14:56.806 09:14:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:14:56.806 09:14:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:56.806 09:14:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:14:56.806 ************************************
00:14:56.806 START TEST raid_state_function_test_sb
00:14:56.806 ************************************
00:14:56.806 09:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true
00:14:56.806 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:14:56.806 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:14:56.806 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:14:56.806 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:14:56.806 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:14:56.806 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:14:56.806 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66465
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66465'
Process raid pid: 66465
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66465
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 66465 ']'
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:14:56.807 09:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:56.807 [2024-10-15 09:14:40.689557] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization...
00:14:56.807 [2024-10-15 09:14:40.689751] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:57.066 [2024-10-15 09:14:40.865373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:57.324 [2024-10-15 09:14:41.015576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:57.324 [2024-10-15 09:14:41.243659] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:57.324 [2024-10-15 09:14:41.243720] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:57.890 [2024-10-15 09:14:41.644093] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:57.890 [2024-10-15 09:14:41.644379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:57.890 [2024-10-15 09:14:41.644510] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:57.890 [2024-10-15 09:14:41.644651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:57.890 [2024-10-15 09:14:41.644769] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:57.890 [2024-10-15 09:14:41.644901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:57.890 09:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:57.890 "name": "Existed_Raid",
00:14:57.890 "uuid": "b9f9f029-3e26-4da1-ae3d-e592000c77d6",
00:14:57.890 "strip_size_kb": 64,
00:14:57.890 "state": "configuring",
00:14:57.890 "raid_level": "concat",
00:14:57.890 "superblock": true,
00:14:57.890 "num_base_bdevs": 3,
00:14:57.890 "num_base_bdevs_discovered": 0,
00:14:57.890 "num_base_bdevs_operational": 3,
00:14:57.890 "base_bdevs_list": [
00:14:57.890 {
00:14:57.890 "name": "BaseBdev1",
00:14:57.890 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:57.890 "is_configured": false,
00:14:57.890 "data_offset": 0,
00:14:57.890 "data_size": 0
00:14:57.890 },
00:14:57.890 {
00:14:57.890 "name": "BaseBdev2",
00:14:57.890 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:57.890 "is_configured": false,
00:14:57.890 "data_offset": 0,
00:14:57.890 "data_size": 0
00:14:57.890 },
00:14:57.890 {
00:14:57.890 "name": "BaseBdev3",
00:14:57.890 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:57.891 "is_configured": false,
00:14:57.891 "data_offset": 0,
00:14:57.891 "data_size": 0
00:14:57.891 }
00:14:57.891 ]
00:14:57.891 }'
00:14:57.891 09:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:57.891 09:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:58.458 [2024-10-15 09:14:42.180304] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:58.458 [2024-10-15 09:14:42.180356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:58.458 [2024-10-15 09:14:42.188291] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:58.458 [2024-10-15 09:14:42.188486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:58.458 [2024-10-15 09:14:42.188614] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:58.458 [2024-10-15 09:14:42.188755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:58.458 [2024-10-15 09:14:42.188872] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:58.458 [2024-10-15 09:14:42.189044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:58.458 [2024-10-15 09:14:42.238266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:58.458 BaseBdev1
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:58.458 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:58.458 [
00:14:58.458 {
00:14:58.458 "name": "BaseBdev1",
00:14:58.458 "aliases": [
00:14:58.458 "5d3a0cc8-b085-49b8-8988-35a2b5f16b82"
00:14:58.458 ],
00:14:58.458 "product_name": "Malloc disk",
00:14:58.458 "block_size": 512,
00:14:58.458 "num_blocks": 65536,
00:14:58.458 "uuid": "5d3a0cc8-b085-49b8-8988-35a2b5f16b82",
00:14:58.458 "assigned_rate_limits": {
00:14:58.458 "rw_ios_per_sec": 0,
00:14:58.458 "rw_mbytes_per_sec": 0,
00:14:58.458 "r_mbytes_per_sec": 0,
00:14:58.458 "w_mbytes_per_sec": 0
00:14:58.458 },
00:14:58.458 "claimed": true,
00:14:58.458 "claim_type": "exclusive_write",
00:14:58.458 "zoned": false,
00:14:58.458 "supported_io_types": {
00:14:58.458 "read": true,
00:14:58.458 "write": true,
00:14:58.458 "unmap": true,
00:14:58.458 "flush": true,
00:14:58.458 "reset": true,
00:14:58.458 "nvme_admin": false,
00:14:58.458 "nvme_io": false,
00:14:58.458 "nvme_io_md": false,
00:14:58.458 "write_zeroes": true,
00:14:58.458 "zcopy": true,
00:14:58.458 "get_zone_info": false,
00:14:58.458 "zone_management": false,
00:14:58.458 "zone_append": false,
00:14:58.458 "compare": false,
00:14:58.458 "compare_and_write": false,
00:14:58.458 "abort": true,
00:14:58.458 "seek_hole": false,
00:14:58.459 "seek_data": false,
00:14:58.459 "copy": true,
00:14:58.459 "nvme_iov_md": false
00:14:58.459 },
00:14:58.459 "memory_domains": [
00:14:58.459 {
00:14:58.459 "dma_device_id": "system",
00:14:58.459 "dma_device_type": 1
00:14:58.459 },
00:14:58.459 {
00:14:58.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:58.459 "dma_device_type": 2
00:14:58.459 }
00:14:58.459 ],
00:14:58.459 "driver_specific": {}
00:14:58.459 }
00:14:58.459 ]
00:14:58.459 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:58.459 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:14:58.459 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:14:58.459 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:58.459 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:58.459 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:14:58.459 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:58.459 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:58.459 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:58.459 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:58.459 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:58.459 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:58.459 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:58.459 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:58.459 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:58.459 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:58.459 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:58.459 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:58.459 "name": "Existed_Raid",
00:14:58.459 "uuid": "f1b694de-f2b8-49cf-9c8c-e09924887189",
00:14:58.459 "strip_size_kb": 64,
00:14:58.459 "state": "configuring",
00:14:58.459 "raid_level": "concat",
00:14:58.459 "superblock": true,
00:14:58.459 "num_base_bdevs": 3,
00:14:58.459 "num_base_bdevs_discovered": 1,
00:14:58.459 "num_base_bdevs_operational": 3,
00:14:58.459 "base_bdevs_list": [
00:14:58.459 {
00:14:58.459 "name": "BaseBdev1",
00:14:58.459 "uuid": "5d3a0cc8-b085-49b8-8988-35a2b5f16b82",
00:14:58.459 "is_configured": true,
00:14:58.459 "data_offset": 2048,
00:14:58.459 "data_size": 63488
00:14:58.459 },
00:14:58.459 {
00:14:58.459 "name": "BaseBdev2",
00:14:58.459 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:58.459 "is_configured": false,
00:14:58.459 "data_offset": 0,
00:14:58.459 "data_size": 0
00:14:58.459 },
00:14:58.459 {
00:14:58.459 "name": "BaseBdev3",
00:14:58.459 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:58.459 "is_configured": false,
00:14:58.459 "data_offset": 0,
00:14:58.459 "data_size": 0
00:14:58.459 }
00:14:58.459 ]
00:14:58.459 }'
00:14:58.459 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:58.459 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:59.026 [2024-10-15 09:14:42.762543] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:59.026 [2024-10-15 09:14:42.762656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:59.026 [2024-10-15 09:14:42.770635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
[2024-10-15 09:14:42.773449] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
[2024-10-15 09:14:42.773522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
[2024-10-15 09:14:42.773540] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
[2024-10-15 09:14:42.773557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:59.026 "name": "Existed_Raid",
00:14:59.026 "uuid": "ebbfb183-bccb-44d4-9f8b-2a6caa70b7c4",
00:14:59.026 "strip_size_kb": 64,
00:14:59.026 "state": "configuring",
00:14:59.026 "raid_level": "concat",
00:14:59.026 "superblock": true,
00:14:59.026 "num_base_bdevs": 3,
00:14:59.026 "num_base_bdevs_discovered": 1,
00:14:59.026 "num_base_bdevs_operational": 3,
00:14:59.026 "base_bdevs_list": [
00:14:59.026 {
00:14:59.026 "name": "BaseBdev1",
00:14:59.026 "uuid": "5d3a0cc8-b085-49b8-8988-35a2b5f16b82",
00:14:59.026 "is_configured": true,
00:14:59.026 "data_offset": 2048,
00:14:59.026 "data_size": 63488
00:14:59.026 },
00:14:59.026 {
00:14:59.026 "name": "BaseBdev2",
00:14:59.026 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:59.026 "is_configured": false,
00:14:59.026 "data_offset": 0,
00:14:59.026 "data_size": 0
00:14:59.026 },
00:14:59.026 {
00:14:59.026 "name": "BaseBdev3",
00:14:59.026 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:59.026 "is_configured": false,
00:14:59.026 "data_offset": 0,
00:14:59.026 "data_size": 0
00:14:59.026 }
00:14:59.026 ]
00:14:59.026 }'
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:59.026 09:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:59.593 09:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:14:59.593 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.593 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:59.593 [2024-10-15 09:14:43.363169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:59.593 BaseBdev2
00:14:59.593 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.593 09:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:14:59.593 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:14:59.593 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:14:59.593 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:14:59.593 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:14:59.593 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:14:59.593 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:14:59.593 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.593 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:59.593 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.593 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:59.594 [
00:14:59.594 {
00:14:59.594 "name": "BaseBdev2",
00:14:59.594 "aliases": [
00:14:59.594 "ffd8453f-721f-4420-96ae-1132b75a8930"
00:14:59.594 ],
00:14:59.594 "product_name": "Malloc disk",
00:14:59.594 "block_size": 512,
00:14:59.594 "num_blocks": 65536,
00:14:59.594 "uuid": "ffd8453f-721f-4420-96ae-1132b75a8930",
00:14:59.594 "assigned_rate_limits": {
00:14:59.594 "rw_ios_per_sec": 0,
00:14:59.594 "rw_mbytes_per_sec": 0,
00:14:59.594 "r_mbytes_per_sec": 0,
00:14:59.594 "w_mbytes_per_sec": 0
00:14:59.594 },
00:14:59.594 "claimed": true,
00:14:59.594 "claim_type": "exclusive_write",
00:14:59.594 "zoned": false,
00:14:59.594 "supported_io_types": {
00:14:59.594 "read": true,
00:14:59.594 "write": true,
00:14:59.594 "unmap": true,
00:14:59.594 "flush": true,
00:14:59.594 "reset": true,
00:14:59.594 "nvme_admin": false,
00:14:59.594 "nvme_io": false,
00:14:59.594 "nvme_io_md": false,
00:14:59.594 "write_zeroes": true,
00:14:59.594 "zcopy": true,
00:14:59.594 "get_zone_info": false,
00:14:59.594 "zone_management": false,
00:14:59.594 "zone_append": false,
00:14:59.594 "compare": false,
00:14:59.594 "compare_and_write": false,
00:14:59.594 "abort": true,
00:14:59.594 "seek_hole": false,
00:14:59.594 "seek_data": false,
00:14:59.594 "copy": true,
00:14:59.594 "nvme_iov_md": false
00:14:59.594 },
00:14:59.594 "memory_domains": [
00:14:59.594 {
00:14:59.594 "dma_device_id": "system",
00:14:59.594 "dma_device_type": 1
00:14:59.594 },
00:14:59.594 {
00:14:59.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:59.594 "dma_device_type": 2
00:14:59.594 }
00:14:59.594 ],
00:14:59.594 "driver_specific": {}
00:14:59.594 }
00:14:59.594 ]
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:59.594 "name": "Existed_Raid",
00:14:59.594 "uuid": "ebbfb183-bccb-44d4-9f8b-2a6caa70b7c4",
00:14:59.594 "strip_size_kb": 64,
00:14:59.594 "state": "configuring",
00:14:59.594 "raid_level": "concat",
00:14:59.594 "superblock": true,
00:14:59.594 "num_base_bdevs": 3,
00:14:59.594 "num_base_bdevs_discovered": 2,
00:14:59.594 "num_base_bdevs_operational": 3,
00:14:59.594 "base_bdevs_list": [
00:14:59.594 {
00:14:59.594 "name": "BaseBdev1",
00:14:59.594 "uuid": "5d3a0cc8-b085-49b8-8988-35a2b5f16b82",
00:14:59.594 "is_configured": true,
00:14:59.594 "data_offset": 2048,
00:14:59.594 "data_size": 63488
00:14:59.594 },
00:14:59.594 {
00:14:59.594 "name": "BaseBdev2",
00:14:59.594 "uuid": "ffd8453f-721f-4420-96ae-1132b75a8930",
00:14:59.594 "is_configured": true,
00:14:59.594 "data_offset": 2048,
00:14:59.594 "data_size": 63488
00:14:59.594 },
00:14:59.594 {
00:14:59.594 "name": "BaseBdev3",
00:14:59.594 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:59.594 "is_configured": false,
00:14:59.594 "data_offset": 0,
00:14:59.594 "data_size": 0
00:14:59.594 }
00:14:59.594 ]
00:14:59.594 }'
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:59.594 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.163 09:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:15:00.163 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.163 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.163 [2024-10-15 09:14:43.991227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:00.163 [2024-10-15 09:14:43.991583] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:15:00.163 [2024-10-15 09:14:43.991616] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:15:00.163 [2024-10-15 09:14:43.991979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:15:00.163 BaseBdev3
00:15:00.163 [2024-10-15 09:14:43.992219] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:15:00.163 [2024-10-15 09:14:43.992237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:15:00.163 [2024-10-15 09:14:43.992431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:00.163 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.163 09:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:15:00.163 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:15:00.163 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:00.163 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:15:00.163 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:00.163 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:00.163 09:14:43 bdev_raid.raid_state_function_test_sb --
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:00.163 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.163 09:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.163 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.163 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:00.163 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.163 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.163 [ 00:15:00.163 { 00:15:00.163 "name": "BaseBdev3", 00:15:00.163 "aliases": [ 00:15:00.163 "2bc1b9e5-854f-4562-a74b-053c63d2f802" 00:15:00.163 ], 00:15:00.163 "product_name": "Malloc disk", 00:15:00.163 "block_size": 512, 00:15:00.163 "num_blocks": 65536, 00:15:00.163 "uuid": "2bc1b9e5-854f-4562-a74b-053c63d2f802", 00:15:00.163 "assigned_rate_limits": { 00:15:00.163 "rw_ios_per_sec": 0, 00:15:00.163 "rw_mbytes_per_sec": 0, 00:15:00.163 "r_mbytes_per_sec": 0, 00:15:00.163 "w_mbytes_per_sec": 0 00:15:00.163 }, 00:15:00.163 "claimed": true, 00:15:00.163 "claim_type": "exclusive_write", 00:15:00.163 "zoned": false, 00:15:00.163 "supported_io_types": { 00:15:00.163 "read": true, 00:15:00.163 "write": true, 00:15:00.163 "unmap": true, 00:15:00.163 "flush": true, 00:15:00.163 "reset": true, 00:15:00.163 "nvme_admin": false, 00:15:00.163 "nvme_io": false, 00:15:00.163 "nvme_io_md": false, 00:15:00.163 "write_zeroes": true, 00:15:00.163 "zcopy": true, 00:15:00.163 "get_zone_info": false, 00:15:00.163 "zone_management": false, 00:15:00.163 "zone_append": false, 00:15:00.163 "compare": false, 00:15:00.164 "compare_and_write": false, 00:15:00.164 "abort": true, 00:15:00.164 "seek_hole": false, 00:15:00.164 "seek_data": false, 
00:15:00.164 "copy": true, 00:15:00.164 "nvme_iov_md": false 00:15:00.164 }, 00:15:00.164 "memory_domains": [ 00:15:00.164 { 00:15:00.164 "dma_device_id": "system", 00:15:00.164 "dma_device_type": 1 00:15:00.164 }, 00:15:00.164 { 00:15:00.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.164 "dma_device_type": 2 00:15:00.164 } 00:15:00.164 ], 00:15:00.164 "driver_specific": {} 00:15:00.164 } 00:15:00.164 ] 00:15:00.164 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.164 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:00.164 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:00.164 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:00.164 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:00.164 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.164 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.164 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:00.164 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.164 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.164 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.164 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.164 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.164 09:14:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.164 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.164 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.164 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.164 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.164 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.164 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.164 "name": "Existed_Raid", 00:15:00.164 "uuid": "ebbfb183-bccb-44d4-9f8b-2a6caa70b7c4", 00:15:00.164 "strip_size_kb": 64, 00:15:00.164 "state": "online", 00:15:00.164 "raid_level": "concat", 00:15:00.164 "superblock": true, 00:15:00.164 "num_base_bdevs": 3, 00:15:00.164 "num_base_bdevs_discovered": 3, 00:15:00.164 "num_base_bdevs_operational": 3, 00:15:00.164 "base_bdevs_list": [ 00:15:00.164 { 00:15:00.164 "name": "BaseBdev1", 00:15:00.164 "uuid": "5d3a0cc8-b085-49b8-8988-35a2b5f16b82", 00:15:00.164 "is_configured": true, 00:15:00.164 "data_offset": 2048, 00:15:00.164 "data_size": 63488 00:15:00.164 }, 00:15:00.164 { 00:15:00.164 "name": "BaseBdev2", 00:15:00.164 "uuid": "ffd8453f-721f-4420-96ae-1132b75a8930", 00:15:00.164 "is_configured": true, 00:15:00.164 "data_offset": 2048, 00:15:00.164 "data_size": 63488 00:15:00.164 }, 00:15:00.164 { 00:15:00.164 "name": "BaseBdev3", 00:15:00.164 "uuid": "2bc1b9e5-854f-4562-a74b-053c63d2f802", 00:15:00.164 "is_configured": true, 00:15:00.164 "data_offset": 2048, 00:15:00.164 "data_size": 63488 00:15:00.164 } 00:15:00.164 ] 00:15:00.164 }' 00:15:00.164 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.164 09:14:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.741 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:00.741 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:00.741 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:00.741 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:00.741 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:00.741 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:00.741 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:00.741 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:00.742 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.742 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.742 [2024-10-15 09:14:44.575877] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.742 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.742 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:00.742 "name": "Existed_Raid", 00:15:00.742 "aliases": [ 00:15:00.742 "ebbfb183-bccb-44d4-9f8b-2a6caa70b7c4" 00:15:00.742 ], 00:15:00.742 "product_name": "Raid Volume", 00:15:00.742 "block_size": 512, 00:15:00.742 "num_blocks": 190464, 00:15:00.742 "uuid": "ebbfb183-bccb-44d4-9f8b-2a6caa70b7c4", 00:15:00.742 "assigned_rate_limits": { 00:15:00.742 "rw_ios_per_sec": 0, 00:15:00.742 "rw_mbytes_per_sec": 0, 00:15:00.742 
"r_mbytes_per_sec": 0, 00:15:00.742 "w_mbytes_per_sec": 0 00:15:00.742 }, 00:15:00.742 "claimed": false, 00:15:00.742 "zoned": false, 00:15:00.742 "supported_io_types": { 00:15:00.742 "read": true, 00:15:00.742 "write": true, 00:15:00.742 "unmap": true, 00:15:00.742 "flush": true, 00:15:00.742 "reset": true, 00:15:00.742 "nvme_admin": false, 00:15:00.742 "nvme_io": false, 00:15:00.742 "nvme_io_md": false, 00:15:00.742 "write_zeroes": true, 00:15:00.742 "zcopy": false, 00:15:00.742 "get_zone_info": false, 00:15:00.742 "zone_management": false, 00:15:00.742 "zone_append": false, 00:15:00.742 "compare": false, 00:15:00.742 "compare_and_write": false, 00:15:00.742 "abort": false, 00:15:00.742 "seek_hole": false, 00:15:00.742 "seek_data": false, 00:15:00.742 "copy": false, 00:15:00.742 "nvme_iov_md": false 00:15:00.742 }, 00:15:00.742 "memory_domains": [ 00:15:00.742 { 00:15:00.742 "dma_device_id": "system", 00:15:00.742 "dma_device_type": 1 00:15:00.742 }, 00:15:00.742 { 00:15:00.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.742 "dma_device_type": 2 00:15:00.742 }, 00:15:00.742 { 00:15:00.742 "dma_device_id": "system", 00:15:00.742 "dma_device_type": 1 00:15:00.742 }, 00:15:00.742 { 00:15:00.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.742 "dma_device_type": 2 00:15:00.742 }, 00:15:00.742 { 00:15:00.742 "dma_device_id": "system", 00:15:00.742 "dma_device_type": 1 00:15:00.742 }, 00:15:00.742 { 00:15:00.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.742 "dma_device_type": 2 00:15:00.742 } 00:15:00.742 ], 00:15:00.742 "driver_specific": { 00:15:00.742 "raid": { 00:15:00.742 "uuid": "ebbfb183-bccb-44d4-9f8b-2a6caa70b7c4", 00:15:00.742 "strip_size_kb": 64, 00:15:00.742 "state": "online", 00:15:00.742 "raid_level": "concat", 00:15:00.742 "superblock": true, 00:15:00.742 "num_base_bdevs": 3, 00:15:00.742 "num_base_bdevs_discovered": 3, 00:15:00.742 "num_base_bdevs_operational": 3, 00:15:00.742 "base_bdevs_list": [ 00:15:00.742 { 00:15:00.742 
"name": "BaseBdev1", 00:15:00.742 "uuid": "5d3a0cc8-b085-49b8-8988-35a2b5f16b82", 00:15:00.742 "is_configured": true, 00:15:00.742 "data_offset": 2048, 00:15:00.742 "data_size": 63488 00:15:00.742 }, 00:15:00.742 { 00:15:00.742 "name": "BaseBdev2", 00:15:00.742 "uuid": "ffd8453f-721f-4420-96ae-1132b75a8930", 00:15:00.742 "is_configured": true, 00:15:00.742 "data_offset": 2048, 00:15:00.742 "data_size": 63488 00:15:00.742 }, 00:15:00.742 { 00:15:00.742 "name": "BaseBdev3", 00:15:00.742 "uuid": "2bc1b9e5-854f-4562-a74b-053c63d2f802", 00:15:00.742 "is_configured": true, 00:15:00.742 "data_offset": 2048, 00:15:00.742 "data_size": 63488 00:15:00.742 } 00:15:00.742 ] 00:15:00.742 } 00:15:00.742 } 00:15:00.742 }' 00:15:00.742 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:00.742 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:00.742 BaseBdev2 00:15:00.742 BaseBdev3' 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.001 09:14:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.001 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.001 [2024-10-15 09:14:44.879607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:01.001 [2024-10-15 09:14:44.879808] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:01.001 [2024-10-15 09:14:44.880023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:01.260 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.260 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:01.260 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:15:01.260 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:01.260 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:15:01.260 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:01.260 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:01.260 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.260 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:15:01.260 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:01.260 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.260 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:01.260 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.260 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.260 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.260 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.260 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.260 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.260 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.260 09:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.260 09:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.260 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.260 "name": "Existed_Raid", 00:15:01.260 "uuid": "ebbfb183-bccb-44d4-9f8b-2a6caa70b7c4", 00:15:01.260 "strip_size_kb": 64, 00:15:01.260 "state": "offline", 00:15:01.260 "raid_level": "concat", 00:15:01.260 "superblock": true, 00:15:01.260 "num_base_bdevs": 3, 00:15:01.260 "num_base_bdevs_discovered": 2, 00:15:01.260 "num_base_bdevs_operational": 2, 00:15:01.260 "base_bdevs_list": [ 00:15:01.260 { 00:15:01.260 "name": null, 00:15:01.260 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:01.260 "is_configured": false, 00:15:01.260 "data_offset": 0, 00:15:01.260 "data_size": 63488 00:15:01.260 }, 00:15:01.260 { 00:15:01.260 "name": "BaseBdev2", 00:15:01.260 "uuid": "ffd8453f-721f-4420-96ae-1132b75a8930", 00:15:01.260 "is_configured": true, 00:15:01.260 "data_offset": 2048, 00:15:01.260 "data_size": 63488 00:15:01.260 }, 00:15:01.260 { 00:15:01.260 "name": "BaseBdev3", 00:15:01.260 "uuid": "2bc1b9e5-854f-4562-a74b-053c63d2f802", 00:15:01.260 "is_configured": true, 00:15:01.260 "data_offset": 2048, 00:15:01.260 "data_size": 63488 00:15:01.260 } 00:15:01.260 ] 00:15:01.260 }' 00:15:01.260 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.260 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.828 [2024-10-15 09:14:45.533440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.828 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.828 [2024-10-15 09:14:45.677768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:01.828 [2024-10-15 09:14:45.677983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.088 BaseBdev2 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.088 
09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.088 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.089 [ 00:15:02.089 { 00:15:02.089 "name": "BaseBdev2", 00:15:02.089 "aliases": [ 00:15:02.089 "59667d50-41c3-403d-966f-e7930292d81f" 00:15:02.089 ], 00:15:02.089 "product_name": "Malloc disk", 00:15:02.089 "block_size": 512, 00:15:02.089 "num_blocks": 65536, 00:15:02.089 "uuid": "59667d50-41c3-403d-966f-e7930292d81f", 00:15:02.089 "assigned_rate_limits": { 00:15:02.089 "rw_ios_per_sec": 0, 00:15:02.089 "rw_mbytes_per_sec": 0, 00:15:02.089 "r_mbytes_per_sec": 0, 00:15:02.089 "w_mbytes_per_sec": 0 
00:15:02.089 }, 00:15:02.089 "claimed": false, 00:15:02.089 "zoned": false, 00:15:02.089 "supported_io_types": { 00:15:02.089 "read": true, 00:15:02.089 "write": true, 00:15:02.089 "unmap": true, 00:15:02.089 "flush": true, 00:15:02.089 "reset": true, 00:15:02.089 "nvme_admin": false, 00:15:02.089 "nvme_io": false, 00:15:02.089 "nvme_io_md": false, 00:15:02.089 "write_zeroes": true, 00:15:02.089 "zcopy": true, 00:15:02.089 "get_zone_info": false, 00:15:02.089 "zone_management": false, 00:15:02.089 "zone_append": false, 00:15:02.089 "compare": false, 00:15:02.089 "compare_and_write": false, 00:15:02.089 "abort": true, 00:15:02.089 "seek_hole": false, 00:15:02.089 "seek_data": false, 00:15:02.089 "copy": true, 00:15:02.089 "nvme_iov_md": false 00:15:02.089 }, 00:15:02.089 "memory_domains": [ 00:15:02.089 { 00:15:02.089 "dma_device_id": "system", 00:15:02.089 "dma_device_type": 1 00:15:02.089 }, 00:15:02.089 { 00:15:02.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.089 "dma_device_type": 2 00:15:02.089 } 00:15:02.089 ], 00:15:02.089 "driver_specific": {} 00:15:02.089 } 00:15:02.089 ] 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.089 BaseBdev3 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.089 [ 00:15:02.089 { 00:15:02.089 "name": "BaseBdev3", 00:15:02.089 "aliases": [ 00:15:02.089 "bcdcdfb5-2b1e-4b8e-9e35-7579b0106cbe" 00:15:02.089 ], 00:15:02.089 "product_name": "Malloc disk", 00:15:02.089 "block_size": 512, 00:15:02.089 "num_blocks": 65536, 00:15:02.089 "uuid": "bcdcdfb5-2b1e-4b8e-9e35-7579b0106cbe", 00:15:02.089 "assigned_rate_limits": { 00:15:02.089 "rw_ios_per_sec": 0, 00:15:02.089 "rw_mbytes_per_sec": 0, 
00:15:02.089 "r_mbytes_per_sec": 0, 00:15:02.089 "w_mbytes_per_sec": 0 00:15:02.089 }, 00:15:02.089 "claimed": false, 00:15:02.089 "zoned": false, 00:15:02.089 "supported_io_types": { 00:15:02.089 "read": true, 00:15:02.089 "write": true, 00:15:02.089 "unmap": true, 00:15:02.089 "flush": true, 00:15:02.089 "reset": true, 00:15:02.089 "nvme_admin": false, 00:15:02.089 "nvme_io": false, 00:15:02.089 "nvme_io_md": false, 00:15:02.089 "write_zeroes": true, 00:15:02.089 "zcopy": true, 00:15:02.089 "get_zone_info": false, 00:15:02.089 "zone_management": false, 00:15:02.089 "zone_append": false, 00:15:02.089 "compare": false, 00:15:02.089 "compare_and_write": false, 00:15:02.089 "abort": true, 00:15:02.089 "seek_hole": false, 00:15:02.089 "seek_data": false, 00:15:02.089 "copy": true, 00:15:02.089 "nvme_iov_md": false 00:15:02.089 }, 00:15:02.089 "memory_domains": [ 00:15:02.089 { 00:15:02.089 "dma_device_id": "system", 00:15:02.089 "dma_device_type": 1 00:15:02.089 }, 00:15:02.089 { 00:15:02.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.089 "dma_device_type": 2 00:15:02.089 } 00:15:02.089 ], 00:15:02.089 "driver_specific": {} 00:15:02.089 } 00:15:02.089 ] 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:02.089 [2024-10-15 09:14:45.987106] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:02.089 [2024-10-15 09:14:45.987355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:02.089 [2024-10-15 09:14:45.987510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.089 [2024-10-15 09:14:45.990298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.089 09:14:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.089 09:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.089 09:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.348 09:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.348 "name": "Existed_Raid", 00:15:02.348 "uuid": "97acfb98-5349-4b2f-b4c0-8a0d472f797a", 00:15:02.348 "strip_size_kb": 64, 00:15:02.348 "state": "configuring", 00:15:02.348 "raid_level": "concat", 00:15:02.348 "superblock": true, 00:15:02.348 "num_base_bdevs": 3, 00:15:02.348 "num_base_bdevs_discovered": 2, 00:15:02.348 "num_base_bdevs_operational": 3, 00:15:02.348 "base_bdevs_list": [ 00:15:02.348 { 00:15:02.348 "name": "BaseBdev1", 00:15:02.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.348 "is_configured": false, 00:15:02.348 "data_offset": 0, 00:15:02.348 "data_size": 0 00:15:02.348 }, 00:15:02.348 { 00:15:02.348 "name": "BaseBdev2", 00:15:02.348 "uuid": "59667d50-41c3-403d-966f-e7930292d81f", 00:15:02.348 "is_configured": true, 00:15:02.348 "data_offset": 2048, 00:15:02.348 "data_size": 63488 00:15:02.348 }, 00:15:02.348 { 00:15:02.348 "name": "BaseBdev3", 00:15:02.348 "uuid": "bcdcdfb5-2b1e-4b8e-9e35-7579b0106cbe", 00:15:02.348 "is_configured": true, 00:15:02.348 "data_offset": 2048, 00:15:02.348 "data_size": 63488 00:15:02.348 } 00:15:02.348 ] 00:15:02.348 }' 00:15:02.348 09:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.348 09:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.607 09:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:15:02.607 09:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.607 09:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.607 [2024-10-15 09:14:46.523193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:02.607 09:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.607 09:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:02.607 09:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.607 09:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.607 09:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:02.607 09:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.607 09:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.607 09:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.607 09:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.607 09:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.607 09:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.607 09:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.607 09:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.607 09:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.607 09:14:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.866 09:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.866 09:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.866 "name": "Existed_Raid", 00:15:02.866 "uuid": "97acfb98-5349-4b2f-b4c0-8a0d472f797a", 00:15:02.866 "strip_size_kb": 64, 00:15:02.866 "state": "configuring", 00:15:02.866 "raid_level": "concat", 00:15:02.866 "superblock": true, 00:15:02.866 "num_base_bdevs": 3, 00:15:02.866 "num_base_bdevs_discovered": 1, 00:15:02.866 "num_base_bdevs_operational": 3, 00:15:02.866 "base_bdevs_list": [ 00:15:02.866 { 00:15:02.866 "name": "BaseBdev1", 00:15:02.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.866 "is_configured": false, 00:15:02.866 "data_offset": 0, 00:15:02.866 "data_size": 0 00:15:02.866 }, 00:15:02.866 { 00:15:02.866 "name": null, 00:15:02.866 "uuid": "59667d50-41c3-403d-966f-e7930292d81f", 00:15:02.866 "is_configured": false, 00:15:02.866 "data_offset": 0, 00:15:02.866 "data_size": 63488 00:15:02.866 }, 00:15:02.866 { 00:15:02.866 "name": "BaseBdev3", 00:15:02.866 "uuid": "bcdcdfb5-2b1e-4b8e-9e35-7579b0106cbe", 00:15:02.866 "is_configured": true, 00:15:02.866 "data_offset": 2048, 00:15:02.866 "data_size": 63488 00:15:02.866 } 00:15:02.866 ] 00:15:02.866 }' 00:15:02.866 09:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.866 09:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.171 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.171 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:03.171 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:03.171 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.171 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.430 [2024-10-15 09:14:47.153517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:03.430 BaseBdev1 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.430 [ 00:15:03.430 { 00:15:03.430 "name": "BaseBdev1", 00:15:03.430 "aliases": [ 00:15:03.430 "9eaf4004-7ed6-4ac2-b087-55bc51600f50" 00:15:03.430 ], 00:15:03.430 "product_name": "Malloc disk", 00:15:03.430 "block_size": 512, 00:15:03.430 "num_blocks": 65536, 00:15:03.430 "uuid": "9eaf4004-7ed6-4ac2-b087-55bc51600f50", 00:15:03.430 "assigned_rate_limits": { 00:15:03.430 "rw_ios_per_sec": 0, 00:15:03.430 "rw_mbytes_per_sec": 0, 00:15:03.430 "r_mbytes_per_sec": 0, 00:15:03.430 "w_mbytes_per_sec": 0 00:15:03.430 }, 00:15:03.430 "claimed": true, 00:15:03.430 "claim_type": "exclusive_write", 00:15:03.430 "zoned": false, 00:15:03.430 "supported_io_types": { 00:15:03.430 "read": true, 00:15:03.430 "write": true, 00:15:03.430 "unmap": true, 00:15:03.430 "flush": true, 00:15:03.430 "reset": true, 00:15:03.430 "nvme_admin": false, 00:15:03.430 "nvme_io": false, 00:15:03.430 "nvme_io_md": false, 00:15:03.430 "write_zeroes": true, 00:15:03.430 "zcopy": true, 00:15:03.430 "get_zone_info": false, 00:15:03.430 "zone_management": false, 00:15:03.430 "zone_append": false, 00:15:03.430 "compare": false, 00:15:03.430 "compare_and_write": false, 00:15:03.430 "abort": true, 00:15:03.430 "seek_hole": false, 00:15:03.430 "seek_data": false, 00:15:03.430 "copy": true, 00:15:03.430 "nvme_iov_md": false 00:15:03.430 }, 00:15:03.430 "memory_domains": [ 00:15:03.430 { 00:15:03.430 "dma_device_id": "system", 00:15:03.430 "dma_device_type": 1 00:15:03.430 }, 00:15:03.430 { 00:15:03.430 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:03.430 "dma_device_type": 2 00:15:03.430 } 00:15:03.430 ], 00:15:03.430 "driver_specific": {} 00:15:03.430 } 00:15:03.430 ] 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.430 "name": "Existed_Raid", 00:15:03.430 "uuid": "97acfb98-5349-4b2f-b4c0-8a0d472f797a", 00:15:03.430 "strip_size_kb": 64, 00:15:03.430 "state": "configuring", 00:15:03.430 "raid_level": "concat", 00:15:03.430 "superblock": true, 00:15:03.430 "num_base_bdevs": 3, 00:15:03.430 "num_base_bdevs_discovered": 2, 00:15:03.430 "num_base_bdevs_operational": 3, 00:15:03.430 "base_bdevs_list": [ 00:15:03.430 { 00:15:03.430 "name": "BaseBdev1", 00:15:03.430 "uuid": "9eaf4004-7ed6-4ac2-b087-55bc51600f50", 00:15:03.430 "is_configured": true, 00:15:03.430 "data_offset": 2048, 00:15:03.430 "data_size": 63488 00:15:03.430 }, 00:15:03.430 { 00:15:03.430 "name": null, 00:15:03.430 "uuid": "59667d50-41c3-403d-966f-e7930292d81f", 00:15:03.430 "is_configured": false, 00:15:03.430 "data_offset": 0, 00:15:03.430 "data_size": 63488 00:15:03.430 }, 00:15:03.430 { 00:15:03.430 "name": "BaseBdev3", 00:15:03.430 "uuid": "bcdcdfb5-2b1e-4b8e-9e35-7579b0106cbe", 00:15:03.430 "is_configured": true, 00:15:03.430 "data_offset": 2048, 00:15:03.430 "data_size": 63488 00:15:03.430 } 00:15:03.430 ] 00:15:03.430 }' 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.430 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.997 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.997 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.997 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.997 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- 
# jq '.[0].base_bdevs_list[0].is_configured' 00:15:03.997 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.997 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:03.997 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:03.997 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.997 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.997 [2024-10-15 09:14:47.741796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:03.997 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.997 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:03.997 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.997 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.997 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:03.997 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.997 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.997 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.997 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.997 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.998 09:14:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.998 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.998 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.998 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.998 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.998 09:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.998 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.998 "name": "Existed_Raid", 00:15:03.998 "uuid": "97acfb98-5349-4b2f-b4c0-8a0d472f797a", 00:15:03.998 "strip_size_kb": 64, 00:15:03.998 "state": "configuring", 00:15:03.998 "raid_level": "concat", 00:15:03.998 "superblock": true, 00:15:03.998 "num_base_bdevs": 3, 00:15:03.998 "num_base_bdevs_discovered": 1, 00:15:03.998 "num_base_bdevs_operational": 3, 00:15:03.998 "base_bdevs_list": [ 00:15:03.998 { 00:15:03.998 "name": "BaseBdev1", 00:15:03.998 "uuid": "9eaf4004-7ed6-4ac2-b087-55bc51600f50", 00:15:03.998 "is_configured": true, 00:15:03.998 "data_offset": 2048, 00:15:03.998 "data_size": 63488 00:15:03.998 }, 00:15:03.998 { 00:15:03.998 "name": null, 00:15:03.998 "uuid": "59667d50-41c3-403d-966f-e7930292d81f", 00:15:03.998 "is_configured": false, 00:15:03.998 "data_offset": 0, 00:15:03.998 "data_size": 63488 00:15:03.998 }, 00:15:03.998 { 00:15:03.998 "name": null, 00:15:03.998 "uuid": "bcdcdfb5-2b1e-4b8e-9e35-7579b0106cbe", 00:15:03.998 "is_configured": false, 00:15:03.998 "data_offset": 0, 00:15:03.998 "data_size": 63488 00:15:03.998 } 00:15:03.998 ] 00:15:03.998 }' 00:15:03.998 09:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.998 09:14:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.563 [2024-10-15 09:14:48.330016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.563 09:14:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.563 09:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.563 "name": "Existed_Raid", 00:15:04.563 "uuid": "97acfb98-5349-4b2f-b4c0-8a0d472f797a", 00:15:04.563 "strip_size_kb": 64, 00:15:04.563 "state": "configuring", 00:15:04.563 "raid_level": "concat", 00:15:04.563 "superblock": true, 00:15:04.563 "num_base_bdevs": 3, 00:15:04.563 "num_base_bdevs_discovered": 2, 00:15:04.563 "num_base_bdevs_operational": 3, 00:15:04.563 "base_bdevs_list": [ 00:15:04.563 { 00:15:04.563 "name": "BaseBdev1", 00:15:04.563 "uuid": "9eaf4004-7ed6-4ac2-b087-55bc51600f50", 00:15:04.563 "is_configured": true, 00:15:04.563 "data_offset": 2048, 00:15:04.563 "data_size": 63488 00:15:04.563 }, 00:15:04.563 { 00:15:04.563 "name": null, 00:15:04.563 "uuid": "59667d50-41c3-403d-966f-e7930292d81f", 00:15:04.563 "is_configured": 
false, 00:15:04.563 "data_offset": 0, 00:15:04.563 "data_size": 63488 00:15:04.563 }, 00:15:04.564 { 00:15:04.564 "name": "BaseBdev3", 00:15:04.564 "uuid": "bcdcdfb5-2b1e-4b8e-9e35-7579b0106cbe", 00:15:04.564 "is_configured": true, 00:15:04.564 "data_offset": 2048, 00:15:04.564 "data_size": 63488 00:15:04.564 } 00:15:04.564 ] 00:15:04.564 }' 00:15:04.564 09:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.564 09:14:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.131 09:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.131 09:14:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.131 09:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:05.131 09:14:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.131 09:14:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.131 09:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:05.131 09:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:05.131 09:14:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.131 09:14:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.131 [2024-10-15 09:14:48.926240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:05.131 09:14:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.131 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:05.131 09:14:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.131 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.131 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:05.131 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.131 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.131 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.131 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.131 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.131 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.131 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.131 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.131 09:14:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.131 09:14:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.131 09:14:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.390 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.390 "name": "Existed_Raid", 00:15:05.390 "uuid": "97acfb98-5349-4b2f-b4c0-8a0d472f797a", 00:15:05.390 "strip_size_kb": 64, 00:15:05.390 "state": "configuring", 00:15:05.390 "raid_level": "concat", 00:15:05.390 "superblock": true, 00:15:05.390 "num_base_bdevs": 3, 00:15:05.390 
"num_base_bdevs_discovered": 1, 00:15:05.390 "num_base_bdevs_operational": 3, 00:15:05.390 "base_bdevs_list": [ 00:15:05.390 { 00:15:05.390 "name": null, 00:15:05.390 "uuid": "9eaf4004-7ed6-4ac2-b087-55bc51600f50", 00:15:05.390 "is_configured": false, 00:15:05.390 "data_offset": 0, 00:15:05.390 "data_size": 63488 00:15:05.390 }, 00:15:05.390 { 00:15:05.390 "name": null, 00:15:05.390 "uuid": "59667d50-41c3-403d-966f-e7930292d81f", 00:15:05.390 "is_configured": false, 00:15:05.390 "data_offset": 0, 00:15:05.390 "data_size": 63488 00:15:05.390 }, 00:15:05.390 { 00:15:05.390 "name": "BaseBdev3", 00:15:05.390 "uuid": "bcdcdfb5-2b1e-4b8e-9e35-7579b0106cbe", 00:15:05.390 "is_configured": true, 00:15:05.390 "data_offset": 2048, 00:15:05.390 "data_size": 63488 00:15:05.390 } 00:15:05.390 ] 00:15:05.390 }' 00:15:05.390 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.390 09:14:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.649 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.649 09:14:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.649 09:14:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.649 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:05.649 09:14:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.909 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:05.909 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:05.909 09:14:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.909 09:14:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.909 [2024-10-15 09:14:49.583602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:05.909 09:14:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.909 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:05.909 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.909 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.909 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:05.909 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.909 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.909 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.909 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.909 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.909 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.909 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.909 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.909 09:14:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.909 09:14:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.909 
09:14:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.909 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.909 "name": "Existed_Raid", 00:15:05.909 "uuid": "97acfb98-5349-4b2f-b4c0-8a0d472f797a", 00:15:05.909 "strip_size_kb": 64, 00:15:05.909 "state": "configuring", 00:15:05.909 "raid_level": "concat", 00:15:05.909 "superblock": true, 00:15:05.909 "num_base_bdevs": 3, 00:15:05.909 "num_base_bdevs_discovered": 2, 00:15:05.909 "num_base_bdevs_operational": 3, 00:15:05.909 "base_bdevs_list": [ 00:15:05.909 { 00:15:05.909 "name": null, 00:15:05.909 "uuid": "9eaf4004-7ed6-4ac2-b087-55bc51600f50", 00:15:05.909 "is_configured": false, 00:15:05.909 "data_offset": 0, 00:15:05.909 "data_size": 63488 00:15:05.909 }, 00:15:05.909 { 00:15:05.909 "name": "BaseBdev2", 00:15:05.909 "uuid": "59667d50-41c3-403d-966f-e7930292d81f", 00:15:05.909 "is_configured": true, 00:15:05.909 "data_offset": 2048, 00:15:05.909 "data_size": 63488 00:15:05.909 }, 00:15:05.909 { 00:15:05.909 "name": "BaseBdev3", 00:15:05.909 "uuid": "bcdcdfb5-2b1e-4b8e-9e35-7579b0106cbe", 00:15:05.909 "is_configured": true, 00:15:05.909 "data_offset": 2048, 00:15:05.909 "data_size": 63488 00:15:05.909 } 00:15:05.909 ] 00:15:05.909 }' 00:15:05.909 09:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.909 09:14:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9eaf4004-7ed6-4ac2-b087-55bc51600f50 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.487 [2024-10-15 09:14:50.274459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:06.487 NewBaseBdev 00:15:06.487 [2024-10-15 09:14:50.275033] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:06.487 [2024-10-15 09:14:50.275066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:06.487 [2024-10-15 09:14:50.275426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:06.487 [2024-10-15 09:14:50.275653] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:06.487 [2024-10-15 09:14:50.275671] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:15:06.487 [2024-10-15 09:14:50.275851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.487 [ 00:15:06.487 { 00:15:06.487 "name": "NewBaseBdev", 00:15:06.487 "aliases": [ 00:15:06.487 "9eaf4004-7ed6-4ac2-b087-55bc51600f50" 00:15:06.487 ], 00:15:06.487 "product_name": "Malloc disk", 00:15:06.487 "block_size": 512, 
00:15:06.487 "num_blocks": 65536, 00:15:06.487 "uuid": "9eaf4004-7ed6-4ac2-b087-55bc51600f50", 00:15:06.487 "assigned_rate_limits": { 00:15:06.487 "rw_ios_per_sec": 0, 00:15:06.487 "rw_mbytes_per_sec": 0, 00:15:06.487 "r_mbytes_per_sec": 0, 00:15:06.487 "w_mbytes_per_sec": 0 00:15:06.487 }, 00:15:06.487 "claimed": true, 00:15:06.487 "claim_type": "exclusive_write", 00:15:06.487 "zoned": false, 00:15:06.487 "supported_io_types": { 00:15:06.487 "read": true, 00:15:06.487 "write": true, 00:15:06.487 "unmap": true, 00:15:06.487 "flush": true, 00:15:06.487 "reset": true, 00:15:06.487 "nvme_admin": false, 00:15:06.487 "nvme_io": false, 00:15:06.487 "nvme_io_md": false, 00:15:06.487 "write_zeroes": true, 00:15:06.487 "zcopy": true, 00:15:06.487 "get_zone_info": false, 00:15:06.487 "zone_management": false, 00:15:06.487 "zone_append": false, 00:15:06.487 "compare": false, 00:15:06.487 "compare_and_write": false, 00:15:06.487 "abort": true, 00:15:06.487 "seek_hole": false, 00:15:06.487 "seek_data": false, 00:15:06.487 "copy": true, 00:15:06.487 "nvme_iov_md": false 00:15:06.487 }, 00:15:06.487 "memory_domains": [ 00:15:06.487 { 00:15:06.487 "dma_device_id": "system", 00:15:06.487 "dma_device_type": 1 00:15:06.487 }, 00:15:06.487 { 00:15:06.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.487 "dma_device_type": 2 00:15:06.487 } 00:15:06.487 ], 00:15:06.487 "driver_specific": {} 00:15:06.487 } 00:15:06.487 ] 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.487 "name": "Existed_Raid", 00:15:06.487 "uuid": "97acfb98-5349-4b2f-b4c0-8a0d472f797a", 00:15:06.487 "strip_size_kb": 64, 00:15:06.487 "state": "online", 00:15:06.487 "raid_level": "concat", 00:15:06.487 "superblock": true, 00:15:06.487 "num_base_bdevs": 3, 00:15:06.487 "num_base_bdevs_discovered": 3, 00:15:06.487 "num_base_bdevs_operational": 3, 00:15:06.487 "base_bdevs_list": [ 00:15:06.487 { 00:15:06.487 "name": "NewBaseBdev", 00:15:06.487 "uuid": 
"9eaf4004-7ed6-4ac2-b087-55bc51600f50", 00:15:06.487 "is_configured": true, 00:15:06.487 "data_offset": 2048, 00:15:06.487 "data_size": 63488 00:15:06.487 }, 00:15:06.487 { 00:15:06.487 "name": "BaseBdev2", 00:15:06.487 "uuid": "59667d50-41c3-403d-966f-e7930292d81f", 00:15:06.487 "is_configured": true, 00:15:06.487 "data_offset": 2048, 00:15:06.487 "data_size": 63488 00:15:06.487 }, 00:15:06.487 { 00:15:06.487 "name": "BaseBdev3", 00:15:06.487 "uuid": "bcdcdfb5-2b1e-4b8e-9e35-7579b0106cbe", 00:15:06.487 "is_configured": true, 00:15:06.487 "data_offset": 2048, 00:15:06.487 "data_size": 63488 00:15:06.487 } 00:15:06.487 ] 00:15:06.487 }' 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.487 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.055 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:07.055 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:07.055 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:07.055 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:07.055 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:07.055 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:07.055 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:07.055 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:07.055 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.055 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:07.055 [2024-10-15 09:14:50.843079] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.055 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.055 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:07.055 "name": "Existed_Raid", 00:15:07.055 "aliases": [ 00:15:07.055 "97acfb98-5349-4b2f-b4c0-8a0d472f797a" 00:15:07.055 ], 00:15:07.055 "product_name": "Raid Volume", 00:15:07.055 "block_size": 512, 00:15:07.055 "num_blocks": 190464, 00:15:07.055 "uuid": "97acfb98-5349-4b2f-b4c0-8a0d472f797a", 00:15:07.055 "assigned_rate_limits": { 00:15:07.055 "rw_ios_per_sec": 0, 00:15:07.055 "rw_mbytes_per_sec": 0, 00:15:07.055 "r_mbytes_per_sec": 0, 00:15:07.055 "w_mbytes_per_sec": 0 00:15:07.055 }, 00:15:07.055 "claimed": false, 00:15:07.055 "zoned": false, 00:15:07.055 "supported_io_types": { 00:15:07.055 "read": true, 00:15:07.055 "write": true, 00:15:07.055 "unmap": true, 00:15:07.055 "flush": true, 00:15:07.055 "reset": true, 00:15:07.055 "nvme_admin": false, 00:15:07.055 "nvme_io": false, 00:15:07.055 "nvme_io_md": false, 00:15:07.055 "write_zeroes": true, 00:15:07.055 "zcopy": false, 00:15:07.055 "get_zone_info": false, 00:15:07.055 "zone_management": false, 00:15:07.055 "zone_append": false, 00:15:07.055 "compare": false, 00:15:07.055 "compare_and_write": false, 00:15:07.055 "abort": false, 00:15:07.055 "seek_hole": false, 00:15:07.055 "seek_data": false, 00:15:07.055 "copy": false, 00:15:07.055 "nvme_iov_md": false 00:15:07.055 }, 00:15:07.055 "memory_domains": [ 00:15:07.055 { 00:15:07.055 "dma_device_id": "system", 00:15:07.055 "dma_device_type": 1 00:15:07.055 }, 00:15:07.055 { 00:15:07.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.055 "dma_device_type": 2 00:15:07.055 }, 00:15:07.055 { 00:15:07.055 "dma_device_id": "system", 00:15:07.055 "dma_device_type": 1 00:15:07.055 }, 00:15:07.055 { 00:15:07.055 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.055 "dma_device_type": 2 00:15:07.055 }, 00:15:07.055 { 00:15:07.055 "dma_device_id": "system", 00:15:07.055 "dma_device_type": 1 00:15:07.055 }, 00:15:07.055 { 00:15:07.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.055 "dma_device_type": 2 00:15:07.055 } 00:15:07.055 ], 00:15:07.055 "driver_specific": { 00:15:07.055 "raid": { 00:15:07.055 "uuid": "97acfb98-5349-4b2f-b4c0-8a0d472f797a", 00:15:07.055 "strip_size_kb": 64, 00:15:07.055 "state": "online", 00:15:07.055 "raid_level": "concat", 00:15:07.055 "superblock": true, 00:15:07.055 "num_base_bdevs": 3, 00:15:07.055 "num_base_bdevs_discovered": 3, 00:15:07.055 "num_base_bdevs_operational": 3, 00:15:07.055 "base_bdevs_list": [ 00:15:07.055 { 00:15:07.055 "name": "NewBaseBdev", 00:15:07.055 "uuid": "9eaf4004-7ed6-4ac2-b087-55bc51600f50", 00:15:07.055 "is_configured": true, 00:15:07.055 "data_offset": 2048, 00:15:07.055 "data_size": 63488 00:15:07.055 }, 00:15:07.055 { 00:15:07.055 "name": "BaseBdev2", 00:15:07.055 "uuid": "59667d50-41c3-403d-966f-e7930292d81f", 00:15:07.055 "is_configured": true, 00:15:07.055 "data_offset": 2048, 00:15:07.055 "data_size": 63488 00:15:07.055 }, 00:15:07.055 { 00:15:07.055 "name": "BaseBdev3", 00:15:07.055 "uuid": "bcdcdfb5-2b1e-4b8e-9e35-7579b0106cbe", 00:15:07.055 "is_configured": true, 00:15:07.055 "data_offset": 2048, 00:15:07.055 "data_size": 63488 00:15:07.055 } 00:15:07.055 ] 00:15:07.055 } 00:15:07.055 } 00:15:07.055 }' 00:15:07.055 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:07.055 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:07.055 BaseBdev2 00:15:07.055 BaseBdev3' 00:15:07.055 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:15:07.314 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:07.314 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.314 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:07.314 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.314 09:14:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.314 09:14:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.314 [2024-10-15 09:14:51.206762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:07.314 [2024-10-15 09:14:51.206926] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:07.314 [2024-10-15 09:14:51.207172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.314 [2024-10-15 09:14:51.207371] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.314 [2024-10-15 09:14:51.207409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66465 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 66465 ']' 00:15:07.314 09:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 66465 00:15:07.315 09:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:07.315 09:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:07.315 09:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66465 00:15:07.573 killing process with pid 66465 00:15:07.573 09:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:07.573 09:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:07.573 09:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66465' 00:15:07.573 09:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 66465 00:15:07.573 [2024-10-15 09:14:51.247447] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:07.573 09:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 66465 00:15:07.832 [2024-10-15 09:14:51.536352] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:08.768 09:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:08.768 00:15:08.768 real 0m12.121s 00:15:08.768 user 0m19.861s 00:15:08.768 sys 0m1.752s 00:15:08.768 09:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:15:08.768 09:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.768 ************************************ 00:15:08.768 END TEST raid_state_function_test_sb 00:15:08.768 ************************************ 00:15:09.027 09:14:52 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:15:09.027 09:14:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:09.027 09:14:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:09.027 09:14:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:09.027 ************************************ 00:15:09.027 START TEST raid_superblock_test 00:15:09.027 ************************************ 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:09.027 09:14:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67106 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67106 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 67106 ']' 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:09.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:09.027 09:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.027 [2024-10-15 09:14:52.836803] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:15:09.027 [2024-10-15 09:14:52.836969] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67106 ] 00:15:09.287 [2024-10-15 09:14:53.006853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.287 [2024-10-15 09:14:53.157870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.547 [2024-10-15 09:14:53.389446] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.547 [2024-10-15 09:14:53.389527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:10.114 
09:14:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.114 malloc1 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.114 [2024-10-15 09:14:53.916883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:10.114 [2024-10-15 09:14:53.916972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.114 [2024-10-15 09:14:53.917009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:10.114 [2024-10-15 09:14:53.917027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.114 [2024-10-15 09:14:53.920063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.114 [2024-10-15 09:14:53.920106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:10.114 pt1 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.114 malloc2 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.114 09:14:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.115 [2024-10-15 09:14:53.974318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:10.115 [2024-10-15 09:14:53.974392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.115 [2024-10-15 09:14:53.974424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:10.115 [2024-10-15 09:14:53.974456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.115 [2024-10-15 09:14:53.977454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.115 [2024-10-15 09:14:53.977511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:10.115 
pt2 00:15:10.115 09:14:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.115 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:10.115 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:10.115 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:10.115 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:10.115 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:10.115 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:10.115 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:10.115 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:10.115 09:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:10.115 09:14:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.115 09:14:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.115 malloc3 00:15:10.115 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.115 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:10.115 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.115 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.115 [2024-10-15 09:14:54.040681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:10.115 [2024-10-15 09:14:54.040748] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.115 [2024-10-15 09:14:54.040783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:10.115 [2024-10-15 09:14:54.040801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.374 [2024-10-15 09:14:54.044118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.374 [2024-10-15 09:14:54.044206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:10.374 pt3 00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.374 [2024-10-15 09:14:54.048946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:10.374 [2024-10-15 09:14:54.051750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:10.374 [2024-10-15 09:14:54.051865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:10.374 [2024-10-15 09:14:54.052092] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:10.374 [2024-10-15 09:14:54.052130] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:10.374 [2024-10-15 09:14:54.052450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:15:10.374 [2024-10-15 09:14:54.052677] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:10.374 [2024-10-15 09:14:54.052701] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:10.374 [2024-10-15 09:14:54.052946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.374 09:14:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:10.374 "name": "raid_bdev1",
00:15:10.374 "uuid": "4df39bba-4563-48a8-aa05-64dde6ea5dd1",
00:15:10.374 "strip_size_kb": 64,
00:15:10.374 "state": "online",
00:15:10.374 "raid_level": "concat",
00:15:10.374 "superblock": true,
00:15:10.374 "num_base_bdevs": 3,
00:15:10.374 "num_base_bdevs_discovered": 3,
00:15:10.374 "num_base_bdevs_operational": 3,
00:15:10.374 "base_bdevs_list": [
00:15:10.374 {
00:15:10.374 "name": "pt1",
00:15:10.374 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:10.374 "is_configured": true,
00:15:10.374 "data_offset": 2048,
00:15:10.374 "data_size": 63488
00:15:10.374 },
00:15:10.374 {
00:15:10.374 "name": "pt2",
00:15:10.374 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:10.374 "is_configured": true,
00:15:10.374 "data_offset": 2048,
00:15:10.374 "data_size": 63488
00:15:10.374 },
00:15:10.374 {
00:15:10.374 "name": "pt3",
00:15:10.374 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:10.374 "is_configured": true,
00:15:10.374 "data_offset": 2048,
00:15:10.374 "data_size": 63488
00:15:10.374 }
00:15:10.374 ]
00:15:10.374 }'
00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:10.374 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:10.940 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:15:10.940 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:15:10.940 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:10.940 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local
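[Annotation: the `verify_raid_bdev_state raid_bdev1 online concat 64 3` step above compares fields of the `raid_bdev_info` JSON shown in the log. A minimal sketch of those comparisons, using the values copied from the log (the RPC call itself is replaced by a literal; this mirrors the shape of the check, not the script's exact implementation):]

```python
import json

# select(.name == "raid_bdev1") result from `bdev_raid_get_bdevs all`, abridged from the log above.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "4df39bba-4563-48a8-aa05-64dde6ea5dd1",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "concat",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}
""")

# The same comparisons verify_raid_bdev_state makes for `raid_bdev1 online concat 64 3`.
assert raid_bdev_info["state"] == "online"
assert raid_bdev_info["raid_level"] == "concat"
assert raid_bdev_info["strip_size_kb"] == 64
assert raid_bdev_info["num_base_bdevs_operational"] == 3
assert raid_bdev_info["num_base_bdevs_discovered"] == raid_bdev_info["num_base_bdevs"]
```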
base_bdev_names 00:15:10.940 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:10.940 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:10.940 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:10.940 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.940 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:10.940 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.940 [2024-10-15 09:14:54.589555] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.940 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.940 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:10.940 "name": "raid_bdev1", 00:15:10.940 "aliases": [ 00:15:10.940 "4df39bba-4563-48a8-aa05-64dde6ea5dd1" 00:15:10.940 ], 00:15:10.940 "product_name": "Raid Volume", 00:15:10.940 "block_size": 512, 00:15:10.940 "num_blocks": 190464, 00:15:10.940 "uuid": "4df39bba-4563-48a8-aa05-64dde6ea5dd1", 00:15:10.940 "assigned_rate_limits": { 00:15:10.940 "rw_ios_per_sec": 0, 00:15:10.940 "rw_mbytes_per_sec": 0, 00:15:10.940 "r_mbytes_per_sec": 0, 00:15:10.940 "w_mbytes_per_sec": 0 00:15:10.940 }, 00:15:10.940 "claimed": false, 00:15:10.940 "zoned": false, 00:15:10.940 "supported_io_types": { 00:15:10.940 "read": true, 00:15:10.940 "write": true, 00:15:10.940 "unmap": true, 00:15:10.940 "flush": true, 00:15:10.940 "reset": true, 00:15:10.940 "nvme_admin": false, 00:15:10.940 "nvme_io": false, 00:15:10.940 "nvme_io_md": false, 00:15:10.940 "write_zeroes": true, 00:15:10.940 "zcopy": false, 00:15:10.940 "get_zone_info": false, 00:15:10.940 "zone_management": false, 00:15:10.940 "zone_append": false, 00:15:10.940 "compare": 
false, 00:15:10.940 "compare_and_write": false, 00:15:10.940 "abort": false, 00:15:10.940 "seek_hole": false, 00:15:10.940 "seek_data": false, 00:15:10.940 "copy": false, 00:15:10.940 "nvme_iov_md": false 00:15:10.940 }, 00:15:10.940 "memory_domains": [ 00:15:10.940 { 00:15:10.940 "dma_device_id": "system", 00:15:10.940 "dma_device_type": 1 00:15:10.940 }, 00:15:10.940 { 00:15:10.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.940 "dma_device_type": 2 00:15:10.940 }, 00:15:10.940 { 00:15:10.940 "dma_device_id": "system", 00:15:10.940 "dma_device_type": 1 00:15:10.940 }, 00:15:10.940 { 00:15:10.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.941 "dma_device_type": 2 00:15:10.941 }, 00:15:10.941 { 00:15:10.941 "dma_device_id": "system", 00:15:10.941 "dma_device_type": 1 00:15:10.941 }, 00:15:10.941 { 00:15:10.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.941 "dma_device_type": 2 00:15:10.941 } 00:15:10.941 ], 00:15:10.941 "driver_specific": { 00:15:10.941 "raid": { 00:15:10.941 "uuid": "4df39bba-4563-48a8-aa05-64dde6ea5dd1", 00:15:10.941 "strip_size_kb": 64, 00:15:10.941 "state": "online", 00:15:10.941 "raid_level": "concat", 00:15:10.941 "superblock": true, 00:15:10.941 "num_base_bdevs": 3, 00:15:10.941 "num_base_bdevs_discovered": 3, 00:15:10.941 "num_base_bdevs_operational": 3, 00:15:10.941 "base_bdevs_list": [ 00:15:10.941 { 00:15:10.941 "name": "pt1", 00:15:10.941 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.941 "is_configured": true, 00:15:10.941 "data_offset": 2048, 00:15:10.941 "data_size": 63488 00:15:10.941 }, 00:15:10.941 { 00:15:10.941 "name": "pt2", 00:15:10.941 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.941 "is_configured": true, 00:15:10.941 "data_offset": 2048, 00:15:10.941 "data_size": 63488 00:15:10.941 }, 00:15:10.941 { 00:15:10.941 "name": "pt3", 00:15:10.941 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.941 "is_configured": true, 00:15:10.941 "data_offset": 2048, 00:15:10.941 
"data_size": 63488 00:15:10.941 } 00:15:10.941 ] 00:15:10.941 } 00:15:10.941 } 00:15:10.941 }' 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:10.941 pt2 00:15:10.941 pt3' 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:10.941 09:14:54 
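[Annotation: the `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'` filter above, which yields `base_bdev_names='pt1 pt2 pt3'`, can be reproduced in a short Python sketch. The JSON literal is abridged from the `bdev_get_bdevs -b raid_bdev1` output in the log; only the fields the filter touches are kept:]

```python
import json

# Abridged driver_specific section of the raid_bdev1 dump from the log above.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "pt1", "uuid": "00000000-0000-0000-0000-000000000001", "is_configured": true},
        {"name": "pt2", "uuid": "00000000-0000-0000-0000-000000000002", "is_configured": true},
        {"name": "pt3", "uuid": "00000000-0000-0000-0000-000000000003", "is_configured": true}
      ]
    }
  }
}
""")

# Equivalent of: jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
base_bdev_names = [
    base["name"]
    for base in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if base["is_configured"]
]
print(" ".join(base_bdev_names))  # pt1 pt2 pt3
```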
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.941 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.200 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.200 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.200 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.200 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:11.200 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:11.200 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.200 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.200 [2024-10-15 09:14:54.905552] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.200 09:14:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.200 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4df39bba-4563-48a8-aa05-64dde6ea5dd1 00:15:11.200 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4df39bba-4563-48a8-aa05-64dde6ea5dd1 ']' 00:15:11.200 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:11.200 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.200 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.200 [2024-10-15 09:14:54.953168] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:11.200 [2024-10-15 09:14:54.953224] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:11.200 [2024-10-15 09:14:54.953337] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.200 [2024-10-15 09:14:54.953437] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:11.200 [2024-10-15 09:14:54.953453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:11.200 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.200 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:11.200 09:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.200 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.200 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.200 09:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.200 09:14:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.200 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.200 [2024-10-15 09:14:55.097287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:11.200 [2024-10-15 09:14:55.100189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed
00:15:11.200 [2024-10-15 09:14:55.100397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:15:11.200 [2024-10-15 09:14:55.100489] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:15:11.200 [2024-10-15 09:14:55.100564] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:15:11.200 [2024-10-15 09:14:55.100600] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:15:11.201 [2024-10-15 09:14:55.100630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:11.201 [2024-10-15 09:14:55.100648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:15:11.201 request:
00:15:11.201 {
00:15:11.201 "name": "raid_bdev1",
00:15:11.201 "raid_level": "concat",
00:15:11.201 "base_bdevs": [
00:15:11.201 "malloc1",
00:15:11.201 "malloc2",
00:15:11.201 "malloc3"
00:15:11.201 ],
00:15:11.201 "strip_size_kb": 64,
00:15:11.201 "superblock": false,
00:15:11.201 "method": "bdev_raid_create",
00:15:11.201 "req_id": 1
00:15:11.201 }
00:15:11.201 Got JSON-RPC error response
00:15:11.201 response:
00:15:11.201 {
00:15:11.201 "code": -17,
00:15:11.201 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:15:11.201 }
00:15:11.201 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:15:11.201 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:15:11.201 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:15:11.201 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:15:11.201 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- #
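[Annotation: the `NOT rpc_cmd bdev_raid_create ...` step above expects this call to fail, because the malloc bdevs still carry the superblock of the previously deleted raid_bdev1. The JSON-RPC error code -17 corresponds to -EEXIST on Linux. A small sketch of how a client would detect this outcome, with the response copied from the log:]

```python
import json

# JSON-RPC error returned by the duplicate bdev_raid_create call, copied from the log above.
error_response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

# -17 is -EEXIST: creating raid_bdev1 again over bdevs that hold a stale
# superblock for a different raid bdev is rejected, which is what the
# NOT wrapper in the test asserts.
assert error_response["code"] == -17
assert "File exists" in error_response["message"]
```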
(( !es == 0 )) 00:15:11.201 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.201 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.201 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.201 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:11.201 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.459 [2024-10-15 09:14:55.165490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:11.459 [2024-10-15 09:14:55.165584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.459 [2024-10-15 09:14:55.165621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:11.459 [2024-10-15 09:14:55.165638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.459 [2024-10-15 09:14:55.168858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.459 [2024-10-15 09:14:55.168906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:11.459 [2024-10-15 09:14:55.169037] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:11.459 [2024-10-15 09:14:55.169142] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:11.459 pt1 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.459 "name": "raid_bdev1", 
00:15:11.459 "uuid": "4df39bba-4563-48a8-aa05-64dde6ea5dd1", 00:15:11.459 "strip_size_kb": 64, 00:15:11.459 "state": "configuring", 00:15:11.459 "raid_level": "concat", 00:15:11.459 "superblock": true, 00:15:11.459 "num_base_bdevs": 3, 00:15:11.459 "num_base_bdevs_discovered": 1, 00:15:11.459 "num_base_bdevs_operational": 3, 00:15:11.459 "base_bdevs_list": [ 00:15:11.459 { 00:15:11.459 "name": "pt1", 00:15:11.459 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:11.459 "is_configured": true, 00:15:11.459 "data_offset": 2048, 00:15:11.459 "data_size": 63488 00:15:11.459 }, 00:15:11.459 { 00:15:11.459 "name": null, 00:15:11.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.459 "is_configured": false, 00:15:11.459 "data_offset": 2048, 00:15:11.459 "data_size": 63488 00:15:11.459 }, 00:15:11.459 { 00:15:11.459 "name": null, 00:15:11.459 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:11.459 "is_configured": false, 00:15:11.459 "data_offset": 2048, 00:15:11.459 "data_size": 63488 00:15:11.459 } 00:15:11.459 ] 00:15:11.459 }' 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.459 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.025 [2024-10-15 09:14:55.693713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:12.025 [2024-10-15 09:14:55.693956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.025 [2024-10-15 09:14:55.694008] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:12.025 [2024-10-15 09:14:55.694027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.025 [2024-10-15 09:14:55.694683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.025 [2024-10-15 09:14:55.694727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:12.025 [2024-10-15 09:14:55.694854] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:12.025 [2024-10-15 09:14:55.694888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:12.025 pt2 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.025 [2024-10-15 09:14:55.701685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.025 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.025 "name": "raid_bdev1", 00:15:12.025 "uuid": "4df39bba-4563-48a8-aa05-64dde6ea5dd1", 00:15:12.025 "strip_size_kb": 64, 00:15:12.025 "state": "configuring", 00:15:12.025 "raid_level": "concat", 00:15:12.025 "superblock": true, 00:15:12.025 "num_base_bdevs": 3, 00:15:12.025 "num_base_bdevs_discovered": 1, 00:15:12.025 "num_base_bdevs_operational": 3, 00:15:12.025 "base_bdevs_list": [ 00:15:12.025 { 00:15:12.026 "name": "pt1", 00:15:12.026 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:12.026 "is_configured": true, 00:15:12.026 "data_offset": 2048, 00:15:12.026 "data_size": 63488 00:15:12.026 }, 00:15:12.026 { 00:15:12.026 "name": null, 00:15:12.026 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:12.026 "is_configured": false, 00:15:12.026 "data_offset": 0, 00:15:12.026 "data_size": 63488 00:15:12.026 }, 00:15:12.026 { 00:15:12.026 "name": null, 00:15:12.026 
"uuid": "00000000-0000-0000-0000-000000000003", 00:15:12.026 "is_configured": false, 00:15:12.026 "data_offset": 2048, 00:15:12.026 "data_size": 63488 00:15:12.026 } 00:15:12.026 ] 00:15:12.026 }' 00:15:12.026 09:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.026 09:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.589 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:12.589 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:12.589 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:12.589 09:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.589 09:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.589 [2024-10-15 09:14:56.261772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:12.589 [2024-10-15 09:14:56.262042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.589 [2024-10-15 09:14:56.262213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:12.589 [2024-10-15 09:14:56.262249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.589 [2024-10-15 09:14:56.262911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.589 [2024-10-15 09:14:56.262946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:12.589 [2024-10-15 09:14:56.263066] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:12.589 [2024-10-15 09:14:56.263109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:12.589 pt2 00:15:12.589 09:14:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.589 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:12.589 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:12.589 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:12.589 09:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.589 09:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.589 [2024-10-15 09:14:56.273819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:12.589 [2024-10-15 09:14:56.274049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.589 [2024-10-15 09:14:56.274222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:12.589 [2024-10-15 09:14:56.274361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.589 [2024-10-15 09:14:56.275164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.589 [2024-10-15 09:14:56.275221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:12.589 [2024-10-15 09:14:56.275337] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:12.589 [2024-10-15 09:14:56.275380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:12.589 [2024-10-15 09:14:56.275560] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:12.589 [2024-10-15 09:14:56.275583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:12.589 [2024-10-15 09:14:56.275922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:15:12.589 [2024-10-15 09:14:56.276141] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:12.589 [2024-10-15 09:14:56.276159] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:12.589 [2024-10-15 09:14:56.276341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.589 pt3 00:15:12.589 09:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.590 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:12.590 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:12.590 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:12.590 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.590 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.590 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:12.590 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.590 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.590 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.590 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.590 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.590 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.590 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.590 09:14:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.590 09:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.590 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.590 09:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.590 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.590 "name": "raid_bdev1", 00:15:12.590 "uuid": "4df39bba-4563-48a8-aa05-64dde6ea5dd1", 00:15:12.590 "strip_size_kb": 64, 00:15:12.590 "state": "online", 00:15:12.590 "raid_level": "concat", 00:15:12.590 "superblock": true, 00:15:12.590 "num_base_bdevs": 3, 00:15:12.590 "num_base_bdevs_discovered": 3, 00:15:12.590 "num_base_bdevs_operational": 3, 00:15:12.590 "base_bdevs_list": [ 00:15:12.590 { 00:15:12.590 "name": "pt1", 00:15:12.590 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:12.590 "is_configured": true, 00:15:12.590 "data_offset": 2048, 00:15:12.590 "data_size": 63488 00:15:12.590 }, 00:15:12.590 { 00:15:12.590 "name": "pt2", 00:15:12.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:12.590 "is_configured": true, 00:15:12.590 "data_offset": 2048, 00:15:12.590 "data_size": 63488 00:15:12.590 }, 00:15:12.590 { 00:15:12.590 "name": "pt3", 00:15:12.590 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:12.590 "is_configured": true, 00:15:12.590 "data_offset": 2048, 00:15:12.590 "data_size": 63488 00:15:12.590 } 00:15:12.590 ] 00:15:12.590 }' 00:15:12.590 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.590 09:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.156 [2024-10-15 09:14:56.802360] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:13.156 "name": "raid_bdev1", 00:15:13.156 "aliases": [ 00:15:13.156 "4df39bba-4563-48a8-aa05-64dde6ea5dd1" 00:15:13.156 ], 00:15:13.156 "product_name": "Raid Volume", 00:15:13.156 "block_size": 512, 00:15:13.156 "num_blocks": 190464, 00:15:13.156 "uuid": "4df39bba-4563-48a8-aa05-64dde6ea5dd1", 00:15:13.156 "assigned_rate_limits": { 00:15:13.156 "rw_ios_per_sec": 0, 00:15:13.156 "rw_mbytes_per_sec": 0, 00:15:13.156 "r_mbytes_per_sec": 0, 00:15:13.156 "w_mbytes_per_sec": 0 00:15:13.156 }, 00:15:13.156 "claimed": false, 00:15:13.156 "zoned": false, 00:15:13.156 "supported_io_types": { 00:15:13.156 "read": true, 00:15:13.156 "write": true, 00:15:13.156 "unmap": true, 00:15:13.156 "flush": true, 00:15:13.156 "reset": true, 00:15:13.156 "nvme_admin": false, 00:15:13.156 "nvme_io": false, 
00:15:13.156 "nvme_io_md": false, 00:15:13.156 "write_zeroes": true, 00:15:13.156 "zcopy": false, 00:15:13.156 "get_zone_info": false, 00:15:13.156 "zone_management": false, 00:15:13.156 "zone_append": false, 00:15:13.156 "compare": false, 00:15:13.156 "compare_and_write": false, 00:15:13.156 "abort": false, 00:15:13.156 "seek_hole": false, 00:15:13.156 "seek_data": false, 00:15:13.156 "copy": false, 00:15:13.156 "nvme_iov_md": false 00:15:13.156 }, 00:15:13.156 "memory_domains": [ 00:15:13.156 { 00:15:13.156 "dma_device_id": "system", 00:15:13.156 "dma_device_type": 1 00:15:13.156 }, 00:15:13.156 { 00:15:13.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.156 "dma_device_type": 2 00:15:13.156 }, 00:15:13.156 { 00:15:13.156 "dma_device_id": "system", 00:15:13.156 "dma_device_type": 1 00:15:13.156 }, 00:15:13.156 { 00:15:13.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.156 "dma_device_type": 2 00:15:13.156 }, 00:15:13.156 { 00:15:13.156 "dma_device_id": "system", 00:15:13.156 "dma_device_type": 1 00:15:13.156 }, 00:15:13.156 { 00:15:13.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.156 "dma_device_type": 2 00:15:13.156 } 00:15:13.156 ], 00:15:13.156 "driver_specific": { 00:15:13.156 "raid": { 00:15:13.156 "uuid": "4df39bba-4563-48a8-aa05-64dde6ea5dd1", 00:15:13.156 "strip_size_kb": 64, 00:15:13.156 "state": "online", 00:15:13.156 "raid_level": "concat", 00:15:13.156 "superblock": true, 00:15:13.156 "num_base_bdevs": 3, 00:15:13.156 "num_base_bdevs_discovered": 3, 00:15:13.156 "num_base_bdevs_operational": 3, 00:15:13.156 "base_bdevs_list": [ 00:15:13.156 { 00:15:13.156 "name": "pt1", 00:15:13.156 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:13.156 "is_configured": true, 00:15:13.156 "data_offset": 2048, 00:15:13.156 "data_size": 63488 00:15:13.156 }, 00:15:13.156 { 00:15:13.156 "name": "pt2", 00:15:13.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:13.156 "is_configured": true, 00:15:13.156 "data_offset": 2048, 00:15:13.156 
"data_size": 63488 00:15:13.156 }, 00:15:13.156 { 00:15:13.156 "name": "pt3", 00:15:13.156 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:13.156 "is_configured": true, 00:15:13.156 "data_offset": 2048, 00:15:13.156 "data_size": 63488 00:15:13.156 } 00:15:13.156 ] 00:15:13.156 } 00:15:13.156 } 00:15:13.156 }' 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:13.156 pt2 00:15:13.156 pt3' 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:13.156 09:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:15:13.156 09:14:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.156 09:14:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.156 09:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.156 09:14:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.156 09:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:13.156 09:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:13.156 09:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:13.156 09:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:13.156 09:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.156 09:14:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.156 09:14:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.156 09:14:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:13.415 [2024-10-15 09:14:57.114376] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4df39bba-4563-48a8-aa05-64dde6ea5dd1 '!=' 4df39bba-4563-48a8-aa05-64dde6ea5dd1 ']' 00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67106 00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 67106 ']' 00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 67106 00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67106 00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67106' 00:15:13.415 killing process with pid 67106 00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 67106 00:15:13.415 [2024-10-15 09:14:57.192790] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:15:13.415 09:14:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 67106 00:15:13.415 [2024-10-15 09:14:57.193091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.415 [2024-10-15 09:14:57.193223] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:13.415 [2024-10-15 09:14:57.193249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:13.673 [2024-10-15 09:14:57.477837] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:15.062 09:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:15.062 00:15:15.062 real 0m5.867s 00:15:15.062 user 0m8.813s 00:15:15.062 sys 0m0.858s 00:15:15.062 09:14:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:15.062 09:14:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.062 ************************************ 00:15:15.062 END TEST raid_superblock_test 00:15:15.062 ************************************ 00:15:15.062 09:14:58 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:15:15.062 09:14:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:15.062 09:14:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:15.062 09:14:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:15.062 ************************************ 00:15:15.062 START TEST raid_read_error_test 00:15:15.062 ************************************ 00:15:15.062 09:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:15:15.062 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:15:15.062 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:15:15.062 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:15.062 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:15.062 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:15.062 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:15.062 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:15.063 09:14:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zwaF67uFnY 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67366 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67366 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 67366 ']' 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:15.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:15.063 09:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.063 [2024-10-15 09:14:58.760101] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:15:15.063 [2024-10-15 09:14:58.760304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67366 ] 00:15:15.063 [2024-10-15 09:14:58.926184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.321 [2024-10-15 09:14:59.075089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.578 [2024-10-15 09:14:59.304545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:15.578 [2024-10-15 09:14:59.304653] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.187 BaseBdev1_malloc 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.187 true 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.187 [2024-10-15 09:14:59.849094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:16.187 [2024-10-15 09:14:59.849191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.187 [2024-10-15 09:14:59.849225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:16.187 [2024-10-15 09:14:59.849245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.187 [2024-10-15 09:14:59.852237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.187 [2024-10-15 09:14:59.852292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:16.187 BaseBdev1 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.187 BaseBdev2_malloc 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.187 true 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.187 [2024-10-15 09:14:59.921761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:16.187 [2024-10-15 09:14:59.921839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.187 [2024-10-15 09:14:59.921867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:16.187 [2024-10-15 09:14:59.921886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.187 [2024-10-15 09:14:59.924874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.187 [2024-10-15 09:14:59.924926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:16.187 BaseBdev2 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.187 BaseBdev3_malloc 00:15:16.187 09:14:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.187 true 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.187 09:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:16.187 09:15:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.187 09:15:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.187 [2024-10-15 09:15:00.006263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:16.187 [2024-10-15 09:15:00.006336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.187 [2024-10-15 09:15:00.006366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:16.187 [2024-10-15 09:15:00.006386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.187 [2024-10-15 09:15:00.009405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.187 [2024-10-15 09:15:00.009455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:16.187 BaseBdev3 00:15:16.187 09:15:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.187 09:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:15:16.187 09:15:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable
00:15:16.187 09:15:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:16.187 [2024-10-15 09:15:00.018436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:16.187 [2024-10-15 09:15:00.021143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:16.187 [2024-10-15 09:15:00.021276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:16.187 [2024-10-15 09:15:00.021563] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:15:16.187 [2024-10-15 09:15:00.021596] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:15:16.187 [2024-10-15 09:15:00.021943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:15:16.187 [2024-10-15 09:15:00.022203] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:15:16.187 [2024-10-15 09:15:00.022239] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:15:16.187 [2024-10-15 09:15:00.022538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:16.187 09:15:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:16.187 09:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:15:16.187 09:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:16.187 09:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:16.187 09:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:15:16.187 09:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:16.187 09:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:16.187 09:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:16.187 09:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:16.187 09:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:16.187 09:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:16.188 09:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:16.188 09:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:16.188 09:15:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:16.188 09:15:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:16.188 09:15:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:16.188 09:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:16.188 "name": "raid_bdev1",
00:15:16.188 "uuid": "3dc7d35b-0352-4860-b613-ce08f0549517",
00:15:16.188 "strip_size_kb": 64,
00:15:16.188 "state": "online",
00:15:16.188 "raid_level": "concat",
00:15:16.188 "superblock": true,
00:15:16.188 "num_base_bdevs": 3,
00:15:16.188 "num_base_bdevs_discovered": 3,
00:15:16.188 "num_base_bdevs_operational": 3,
00:15:16.188 "base_bdevs_list": [
00:15:16.188 {
00:15:16.188 "name": "BaseBdev1",
00:15:16.188 "uuid": "77809d46-38df-5df9-9ceb-a288384b2954",
00:15:16.188 "is_configured": true,
00:15:16.188 "data_offset": 2048,
00:15:16.188 "data_size": 63488
00:15:16.188 },
00:15:16.188 {
00:15:16.188 "name": "BaseBdev2",
00:15:16.188 "uuid": "bb6f7eb2-7fa0-568c-b4c8-6cdcbe5f09db",
00:15:16.188 "is_configured": true,
00:15:16.188 "data_offset": 2048,
00:15:16.188 "data_size": 63488
00:15:16.188 },
00:15:16.188 {
00:15:16.188 "name": "BaseBdev3",
00:15:16.188 "uuid": "2e205ada-722f-5ef3-9f34-836c1eb31d93",
00:15:16.188 "is_configured": true,
00:15:16.188 "data_offset": 2048,
00:15:16.188 "data_size": 63488
00:15:16.188 }
00:15:16.188 ]
00:15:16.188 }'
00:15:16.188 09:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:16.188 09:15:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:16.754 09:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:15:16.754 09:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
[2024-10-15 09:15:00.660147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:17.689 "name": "raid_bdev1",
00:15:17.689 "uuid": "3dc7d35b-0352-4860-b613-ce08f0549517",
00:15:17.689 "strip_size_kb": 64,
00:15:17.689 "state": "online",
00:15:17.689 "raid_level": "concat",
00:15:17.689 "superblock": true,
00:15:17.689 "num_base_bdevs": 3,
00:15:17.689 "num_base_bdevs_discovered": 3,
00:15:17.689 "num_base_bdevs_operational": 3,
00:15:17.689 "base_bdevs_list": [
00:15:17.689 {
00:15:17.689 "name": "BaseBdev1",
00:15:17.689 "uuid": "77809d46-38df-5df9-9ceb-a288384b2954",
00:15:17.689 "is_configured": true,
00:15:17.689 "data_offset": 2048,
00:15:17.689 "data_size": 63488
00:15:17.689 },
00:15:17.689 {
00:15:17.689 "name": "BaseBdev2",
00:15:17.689 "uuid": "bb6f7eb2-7fa0-568c-b4c8-6cdcbe5f09db",
00:15:17.689 "is_configured": true,
00:15:17.689 "data_offset": 2048,
00:15:17.689 "data_size": 63488
00:15:17.689 },
00:15:17.689 {
00:15:17.689 "name": "BaseBdev3",
00:15:17.689 "uuid": "2e205ada-722f-5ef3-9f34-836c1eb31d93",
00:15:17.689 "is_configured": true,
00:15:17.689 "data_offset": 2048,
00:15:17.689 "data_size": 63488
00:15:17.689 }
00:15:17.689 ]
00:15:17.689 }'
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:17.689 09:15:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:18.256 09:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:18.256 09:15:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:18.256 09:15:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:18.256 [2024-10-15 09:15:02.061824] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:18.256 [2024-10-15 09:15:02.061872] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:18.256 [2024-10-15 09:15:02.065353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:18.256 [2024-10-15 09:15:02.065422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:18.256 [2024-10-15 09:15:02.065483] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:18.256 [2024-10-15 09:15:02.065503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:15:18.256 {
00:15:18.256 "results": [
00:15:18.256 {
00:15:18.256 "job": "raid_bdev1",
00:15:18.256 "core_mask": "0x1",
00:15:18.256 "workload": "randrw",
00:15:18.256 "percentage": 50,
00:15:18.256 "status": "finished",
00:15:18.256 "queue_depth": 1,
00:15:18.256 "io_size": 131072,
00:15:18.256 "runtime": 1.399105,
00:15:18.256 "iops": 9897.041322845676,
00:15:18.256 "mibps": 1237.1301653557096,
00:15:18.256 "io_failed": 1,
00:15:18.256 "io_timeout": 0,
00:15:18.256 "avg_latency_us": 142.2694186229715,
00:15:18.256 "min_latency_us": 39.79636363636364,
00:15:18.256 "max_latency_us": 1891.6072727272726
00:15:18.256 }
00:15:18.256 ],
00:15:18.256 "core_count": 1
00:15:18.256 }
00:15:18.256 09:15:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:18.256 09:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67366
00:15:18.256 09:15:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 67366 ']'
00:15:18.256 09:15:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 67366
00:15:18.256 09:15:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:15:18.256 09:15:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:18.256 09:15:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67366
00:15:18.256 09:15:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:15:18.256 09:15:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 67366
09:15:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67366'
09:15:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 67366
[2024-10-15 09:15:02.102414] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
09:15:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 67366
00:15:18.515 [2024-10-15 09:15:02.319821] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:19.892 09:15:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zwaF67uFnY
00:15:19.892 09:15:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:15:19.892 09:15:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:15:19.892 09:15:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71
00:15:19.892 09:15:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:15:19.892 09:15:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:15:19.892 09:15:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:15:19.892 09:15:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]]
00:15:19.892
00:15:19.892 real 0m4.847s
00:15:19.892 user 0m5.961s
00:15:19.892 sys 0m0.621s
00:15:19.892 09:15:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:19.892 09:15:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:19.892 ************************************
00:15:19.892 END TEST raid_read_error_test
00:15:19.892 ************************************
00:15:19.892 09:15:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write
00:15:19.892 09:15:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:15:19.892 09:15:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:19.892 09:15:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:15:19.892 ************************************
00:15:19.892 START TEST raid_write_error_test
00:15:19.892 ************************************
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SvkCoHm9S1
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67506
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67506
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 67506 ']'
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
09:15:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:15:19.892 09:15:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:19.892 [2024-10-15 09:15:03.691185] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization...
[2024-10-15 09:15:03.691396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67506 ]
00:15:20.170 [2024-10-15 09:15:03.870275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:20.170 [2024-10-15 09:15:04.017011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:20.429 [2024-10-15 09:15:04.244616] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-10-15 09:15:04.244710] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:20.996 BaseBdev1_malloc
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:20.996 true
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:20.996 [2024-10-15 09:15:04.747602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
[2024-10-15 09:15:04.747677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-10-15 09:15:04.747710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
[2024-10-15 09:15:04.747731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-10-15 09:15:04.750752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-15 09:15:04.750822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
BaseBdev1
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:20.996 BaseBdev2_malloc
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:20.996 true
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:20.996 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:20.996 [2024-10-15 09:15:04.817362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
[2024-10-15 09:15:04.817438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-10-15 09:15:04.817468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
[2024-10-15 09:15:04.817488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-10-15 09:15:04.820427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-15 09:15:04.820478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
BaseBdev2
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:20.997 BaseBdev3_malloc
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:20.997 true
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:20.997 [2024-10-15 09:15:04.895926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
[2024-10-15 09:15:04.895997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-10-15 09:15:04.896035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
[2024-10-15 09:15:04.896055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-10-15 09:15:04.899027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-15 09:15:04.899079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
BaseBdev3
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:20.997 [2024-10-15 09:15:04.908033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
[2024-10-15 09:15:04.910765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
[2024-10-15 09:15:04.910888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
[2024-10-15 09:15:04.911190] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
[2024-10-15 09:15:04.911223] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
[2024-10-15 09:15:04.911552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
[2024-10-15 09:15:04.911784] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
[2024-10-15 09:15:04.911820] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
[2024-10-15 09:15:04.912046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:20.997 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:21.255 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:21.255 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:21.255 "name": "raid_bdev1",
00:15:21.255 "uuid": "522dcca3-7127-4283-843c-13485d52535a",
00:15:21.255 "strip_size_kb": 64,
00:15:21.255 "state": "online",
00:15:21.255 "raid_level": "concat",
00:15:21.255 "superblock": true,
00:15:21.255 "num_base_bdevs": 3,
00:15:21.255 "num_base_bdevs_discovered": 3,
00:15:21.255 "num_base_bdevs_operational": 3,
00:15:21.255 "base_bdevs_list": [
00:15:21.255 {
00:15:21.255 "name": "BaseBdev1",
00:15:21.255 "uuid": "7a7976ab-9cc7-5327-ae86-50bf8f4b41de",
00:15:21.255 "is_configured": true,
00:15:21.255 "data_offset": 2048,
00:15:21.255 "data_size": 63488
00:15:21.255 },
00:15:21.255 {
00:15:21.255 "name": "BaseBdev2",
00:15:21.255 "uuid": "28442cd0-6ee7-5e9a-bdc6-7a45a8bd9e75",
00:15:21.255 "is_configured": true,
00:15:21.255 "data_offset": 2048,
00:15:21.255 "data_size": 63488
00:15:21.255 },
00:15:21.255 {
00:15:21.255 "name": "BaseBdev3",
00:15:21.255 "uuid": "ea842b66-38d3-5139-8d3b-acbf5a52e10a",
00:15:21.255 "is_configured": true,
00:15:21.255 "data_offset": 2048,
00:15:21.255 "data_size": 63488
00:15:21.255 }
00:15:21.255 ]
00:15:21.255 }'
00:15:21.255 09:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:21.255 09:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:21.514 09:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:15:21.514 09:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:15:21.772 [2024-10-15 09:15:05.521755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:22.708 09:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:22.708 "name": "raid_bdev1",
00:15:22.708 "uuid": "522dcca3-7127-4283-843c-13485d52535a",
00:15:22.708 "strip_size_kb": 64,
00:15:22.708 "state": "online",
00:15:22.709 "raid_level": "concat",
00:15:22.709 "superblock": true,
00:15:22.709 "num_base_bdevs": 3,
00:15:22.709 "num_base_bdevs_discovered": 3,
00:15:22.709 "num_base_bdevs_operational": 3,
00:15:22.709 "base_bdevs_list": [
00:15:22.709 {
00:15:22.709 "name": "BaseBdev1",
00:15:22.709 "uuid": "7a7976ab-9cc7-5327-ae86-50bf8f4b41de",
00:15:22.709 "is_configured": true,
00:15:22.709 "data_offset": 2048,
00:15:22.709 "data_size": 63488
00:15:22.709 },
00:15:22.709 {
00:15:22.709 "name": "BaseBdev2",
00:15:22.709 "uuid": "28442cd0-6ee7-5e9a-bdc6-7a45a8bd9e75",
00:15:22.709 "is_configured": true,
00:15:22.709 "data_offset": 2048,
00:15:22.709 "data_size": 63488
00:15:22.709 },
00:15:22.709 {
00:15:22.709 "name": "BaseBdev3",
00:15:22.709 "uuid": "ea842b66-38d3-5139-8d3b-acbf5a52e10a",
00:15:22.709 "is_configured": true,
00:15:22.709 "data_offset": 2048,
00:15:22.709 "data_size": 63488
00:15:22.709 }
00:15:22.709 ]
00:15:22.709 }'
00:15:22.709 09:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:22.709 09:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:23.274 09:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:23.274 09:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:23.274 09:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:23.274 [2024-10-15 09:15:06.948154] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
[2024-10-15 09:15:06.948216] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-10-15 09:15:06.951731] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-10-15 09:15:06.951796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
[2024-10-15 09:15:06.951853] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-10-15 09:15:06.951868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:15:23.274 {
00:15:23.274 "results": [
00:15:23.274 {
00:15:23.274 "job": "raid_bdev1",
00:15:23.274 "core_mask": "0x1",
00:15:23.274 "workload": "randrw",
00:15:23.274 "percentage": 50,
00:15:23.274 "status": "finished",
00:15:23.274 "queue_depth": 1,
00:15:23.274 "io_size": 131072,
00:15:23.274 "runtime": 1.423558,
00:15:23.274 "iops": 9904.759763915485,
00:15:23.274 "mibps": 1238.0949704894356,
00:15:23.274 "io_failed": 1,
00:15:23.274 "io_timeout": 0,
00:15:23.274 "avg_latency_us": 141.42749385923628,
00:15:23.274 "min_latency_us": 41.192727272727275,
00:15:23.274 "max_latency_us": 1951.1854545454546
00:15:23.274 }
00:15:23.274 ],
00:15:23.274 "core_count": 1
00:15:23.274 }
00:15:23.274 09:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:23.274 09:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67506
00:15:23.274 09:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 67506 ']'
00:15:23.274 09:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 67506
00:15:23.274 09:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:15:23.274 09:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:23.274 09:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67506
00:15:23.274 09:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:15:23.274 09:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 67506
09:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67506'
09:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 67506
[2024-10-15 09:15:06.988432] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
09:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 67506
00:15:23.535 [2024-10-15 09:15:07.209985] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:24.916 09:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SvkCoHm9S1
00:15:24.916 09:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:15:24.916 09:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:15:24.916 09:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70
00:15:24.916 09:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:15:24.916 09:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:15:24.916 09:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:15:24.916 09:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]]
00:15:24.916
00:15:24.916 real 0m4.859s
00:15:24.916 user 0m5.880s
00:15:24.916 sys 0m0.661s
00:15:24.916 ************************************
00:15:24.916 END TEST raid_write_error_test
00:15:24.916 ************************************
00:15:24.916 09:15:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:24.916 09:15:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:24.916 09:15:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:15:24.916 09:15:08 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false
00:15:24.916 09:15:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:15:24.916 09:15:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:24.916 09:15:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:15:24.916 ************************************
00:15:24.916 START TEST raid_state_function_test
00:15:24.916 ************************************
00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false
00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:15:24.916 09:15:08 bdev_raid.raid_state_function_test
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:24.916 Process raid pid: 67655 00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67655 00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67655' 00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67655 00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 67655 ']' 00:15:24.916 09:15:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.917 09:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:24.917 09:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.917 09:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:24.917 09:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.917 [2024-10-15 09:15:08.585900] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:15:24.917 [2024-10-15 09:15:08.586146] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.917 [2024-10-15 09:15:08.760527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.175 [2024-10-15 09:15:08.909744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.434 [2024-10-15 09:15:09.141655] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.434 [2024-10-15 09:15:09.141723] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.692 [2024-10-15 09:15:09.543441] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:25.692 [2024-10-15 09:15:09.543658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:25.692 [2024-10-15 09:15:09.543689] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:25.692 [2024-10-15 09:15:09.543709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:25.692 [2024-10-15 09:15:09.543719] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:25.692 [2024-10-15 09:15:09.543735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.692 
09:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.692 "name": "Existed_Raid", 00:15:25.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.692 "strip_size_kb": 0, 00:15:25.692 "state": "configuring", 00:15:25.692 "raid_level": "raid1", 00:15:25.692 "superblock": false, 00:15:25.692 "num_base_bdevs": 3, 00:15:25.692 "num_base_bdevs_discovered": 0, 00:15:25.692 "num_base_bdevs_operational": 3, 00:15:25.692 "base_bdevs_list": [ 00:15:25.692 { 00:15:25.692 "name": "BaseBdev1", 00:15:25.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.692 "is_configured": false, 00:15:25.692 "data_offset": 0, 00:15:25.692 "data_size": 0 00:15:25.692 }, 00:15:25.692 { 00:15:25.692 "name": "BaseBdev2", 00:15:25.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.692 "is_configured": false, 00:15:25.692 "data_offset": 0, 00:15:25.692 "data_size": 0 00:15:25.692 }, 00:15:25.692 { 00:15:25.692 "name": "BaseBdev3", 00:15:25.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.692 "is_configured": false, 00:15:25.692 "data_offset": 0, 00:15:25.692 "data_size": 0 00:15:25.692 } 00:15:25.692 ] 00:15:25.692 }' 00:15:25.692 09:15:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.692 09:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.259 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:26.259 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.259 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.259 [2024-10-15 09:15:10.035556] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:26.259 [2024-10-15 09:15:10.035607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:26.259 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.259 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:26.259 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.259 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.259 [2024-10-15 09:15:10.043548] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:26.259 [2024-10-15 09:15:10.043609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:26.259 [2024-10-15 09:15:10.043626] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:26.259 [2024-10-15 09:15:10.043642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:26.259 [2024-10-15 09:15:10.043652] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:26.259 [2024-10-15 09:15:10.043667] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:26.259 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.259 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:26.259 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.259 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.259 [2024-10-15 09:15:10.092191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:26.259 BaseBdev1 00:15:26.259 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.259 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:26.259 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:26.259 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:26.259 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:26.259 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:26.259 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:26.259 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.260 [ 00:15:26.260 { 00:15:26.260 "name": "BaseBdev1", 00:15:26.260 "aliases": [ 00:15:26.260 "39379eb7-ef39-4acb-b9ef-e6a16cd07cc7" 00:15:26.260 ], 00:15:26.260 "product_name": "Malloc disk", 00:15:26.260 "block_size": 512, 00:15:26.260 "num_blocks": 65536, 00:15:26.260 "uuid": "39379eb7-ef39-4acb-b9ef-e6a16cd07cc7", 00:15:26.260 "assigned_rate_limits": { 00:15:26.260 "rw_ios_per_sec": 0, 00:15:26.260 "rw_mbytes_per_sec": 0, 00:15:26.260 "r_mbytes_per_sec": 0, 00:15:26.260 "w_mbytes_per_sec": 0 00:15:26.260 }, 00:15:26.260 "claimed": true, 00:15:26.260 "claim_type": "exclusive_write", 00:15:26.260 "zoned": false, 00:15:26.260 "supported_io_types": { 00:15:26.260 "read": true, 00:15:26.260 "write": true, 00:15:26.260 "unmap": true, 00:15:26.260 "flush": true, 00:15:26.260 "reset": true, 00:15:26.260 "nvme_admin": false, 00:15:26.260 "nvme_io": false, 00:15:26.260 "nvme_io_md": false, 00:15:26.260 "write_zeroes": true, 00:15:26.260 "zcopy": true, 00:15:26.260 "get_zone_info": false, 00:15:26.260 "zone_management": false, 00:15:26.260 "zone_append": false, 00:15:26.260 "compare": false, 00:15:26.260 "compare_and_write": false, 00:15:26.260 "abort": true, 00:15:26.260 "seek_hole": false, 00:15:26.260 "seek_data": false, 00:15:26.260 "copy": true, 00:15:26.260 "nvme_iov_md": false 00:15:26.260 }, 00:15:26.260 "memory_domains": [ 00:15:26.260 { 00:15:26.260 "dma_device_id": "system", 00:15:26.260 "dma_device_type": 1 00:15:26.260 }, 00:15:26.260 { 00:15:26.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.260 "dma_device_type": 2 00:15:26.260 } 00:15:26.260 ], 00:15:26.260 "driver_specific": {} 00:15:26.260 } 00:15:26.260 ] 00:15:26.260 09:15:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:15:26.260 "name": "Existed_Raid", 00:15:26.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.260 "strip_size_kb": 0, 00:15:26.260 "state": "configuring", 00:15:26.260 "raid_level": "raid1", 00:15:26.260 "superblock": false, 00:15:26.260 "num_base_bdevs": 3, 00:15:26.260 "num_base_bdevs_discovered": 1, 00:15:26.260 "num_base_bdevs_operational": 3, 00:15:26.260 "base_bdevs_list": [ 00:15:26.260 { 00:15:26.260 "name": "BaseBdev1", 00:15:26.260 "uuid": "39379eb7-ef39-4acb-b9ef-e6a16cd07cc7", 00:15:26.260 "is_configured": true, 00:15:26.260 "data_offset": 0, 00:15:26.260 "data_size": 65536 00:15:26.260 }, 00:15:26.260 { 00:15:26.260 "name": "BaseBdev2", 00:15:26.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.260 "is_configured": false, 00:15:26.260 "data_offset": 0, 00:15:26.260 "data_size": 0 00:15:26.260 }, 00:15:26.260 { 00:15:26.260 "name": "BaseBdev3", 00:15:26.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.260 "is_configured": false, 00:15:26.260 "data_offset": 0, 00:15:26.260 "data_size": 0 00:15:26.260 } 00:15:26.260 ] 00:15:26.260 }' 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.260 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.827 [2024-10-15 09:15:10.632423] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:26.827 [2024-10-15 09:15:10.632504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.827 [2024-10-15 09:15:10.644502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:26.827 [2024-10-15 09:15:10.647313] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:26.827 [2024-10-15 09:15:10.647493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:26.827 [2024-10-15 09:15:10.647641] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:26.827 [2024-10-15 09:15:10.647707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.827 "name": "Existed_Raid", 00:15:26.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.827 "strip_size_kb": 0, 00:15:26.827 "state": "configuring", 00:15:26.827 "raid_level": "raid1", 00:15:26.827 "superblock": false, 00:15:26.827 "num_base_bdevs": 3, 00:15:26.827 "num_base_bdevs_discovered": 1, 00:15:26.827 "num_base_bdevs_operational": 3, 00:15:26.827 "base_bdevs_list": [ 00:15:26.827 { 00:15:26.827 "name": "BaseBdev1", 00:15:26.827 "uuid": "39379eb7-ef39-4acb-b9ef-e6a16cd07cc7", 00:15:26.827 "is_configured": true, 00:15:26.827 "data_offset": 0, 00:15:26.827 "data_size": 65536 00:15:26.827 }, 00:15:26.827 { 00:15:26.827 "name": "BaseBdev2", 00:15:26.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.827 
"is_configured": false, 00:15:26.827 "data_offset": 0, 00:15:26.827 "data_size": 0 00:15:26.827 }, 00:15:26.827 { 00:15:26.827 "name": "BaseBdev3", 00:15:26.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.827 "is_configured": false, 00:15:26.827 "data_offset": 0, 00:15:26.827 "data_size": 0 00:15:26.827 } 00:15:26.827 ] 00:15:26.827 }' 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.827 09:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.394 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:27.394 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.394 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.394 [2024-10-15 09:15:11.204110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.394 BaseBdev2 00:15:27.394 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.394 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:27.395 09:15:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.395 [ 00:15:27.395 { 00:15:27.395 "name": "BaseBdev2", 00:15:27.395 "aliases": [ 00:15:27.395 "6b15fce1-b281-45be-93ee-d111a265a861" 00:15:27.395 ], 00:15:27.395 "product_name": "Malloc disk", 00:15:27.395 "block_size": 512, 00:15:27.395 "num_blocks": 65536, 00:15:27.395 "uuid": "6b15fce1-b281-45be-93ee-d111a265a861", 00:15:27.395 "assigned_rate_limits": { 00:15:27.395 "rw_ios_per_sec": 0, 00:15:27.395 "rw_mbytes_per_sec": 0, 00:15:27.395 "r_mbytes_per_sec": 0, 00:15:27.395 "w_mbytes_per_sec": 0 00:15:27.395 }, 00:15:27.395 "claimed": true, 00:15:27.395 "claim_type": "exclusive_write", 00:15:27.395 "zoned": false, 00:15:27.395 "supported_io_types": { 00:15:27.395 "read": true, 00:15:27.395 "write": true, 00:15:27.395 "unmap": true, 00:15:27.395 "flush": true, 00:15:27.395 "reset": true, 00:15:27.395 "nvme_admin": false, 00:15:27.395 "nvme_io": false, 00:15:27.395 "nvme_io_md": false, 00:15:27.395 "write_zeroes": true, 00:15:27.395 "zcopy": true, 00:15:27.395 "get_zone_info": false, 00:15:27.395 "zone_management": false, 00:15:27.395 "zone_append": false, 00:15:27.395 "compare": false, 00:15:27.395 "compare_and_write": false, 00:15:27.395 "abort": true, 00:15:27.395 "seek_hole": false, 00:15:27.395 "seek_data": false, 00:15:27.395 "copy": true, 00:15:27.395 "nvme_iov_md": false 00:15:27.395 }, 00:15:27.395 
"memory_domains": [ 00:15:27.395 { 00:15:27.395 "dma_device_id": "system", 00:15:27.395 "dma_device_type": 1 00:15:27.395 }, 00:15:27.395 { 00:15:27.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.395 "dma_device_type": 2 00:15:27.395 } 00:15:27.395 ], 00:15:27.395 "driver_specific": {} 00:15:27.395 } 00:15:27.395 ] 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.395 "name": "Existed_Raid", 00:15:27.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.395 "strip_size_kb": 0, 00:15:27.395 "state": "configuring", 00:15:27.395 "raid_level": "raid1", 00:15:27.395 "superblock": false, 00:15:27.395 "num_base_bdevs": 3, 00:15:27.395 "num_base_bdevs_discovered": 2, 00:15:27.395 "num_base_bdevs_operational": 3, 00:15:27.395 "base_bdevs_list": [ 00:15:27.395 { 00:15:27.395 "name": "BaseBdev1", 00:15:27.395 "uuid": "39379eb7-ef39-4acb-b9ef-e6a16cd07cc7", 00:15:27.395 "is_configured": true, 00:15:27.395 "data_offset": 0, 00:15:27.395 "data_size": 65536 00:15:27.395 }, 00:15:27.395 { 00:15:27.395 "name": "BaseBdev2", 00:15:27.395 "uuid": "6b15fce1-b281-45be-93ee-d111a265a861", 00:15:27.395 "is_configured": true, 00:15:27.395 "data_offset": 0, 00:15:27.395 "data_size": 65536 00:15:27.395 }, 00:15:27.395 { 00:15:27.395 "name": "BaseBdev3", 00:15:27.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.395 "is_configured": false, 00:15:27.395 "data_offset": 0, 00:15:27.395 "data_size": 0 00:15:27.395 } 00:15:27.395 ] 00:15:27.395 }' 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.395 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.960 [2024-10-15 09:15:11.818928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.960 [2024-10-15 09:15:11.819283] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:27.960 [2024-10-15 09:15:11.819319] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:27.960 [2024-10-15 09:15:11.819693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:27.960 [2024-10-15 09:15:11.819956] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:27.960 [2024-10-15 09:15:11.819975] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:27.960 [2024-10-15 09:15:11.820369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.960 BaseBdev3 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.960 [ 00:15:27.960 { 00:15:27.960 "name": "BaseBdev3", 00:15:27.960 "aliases": [ 00:15:27.960 "f35601ae-0f44-4cce-8b7f-42c588b851a8" 00:15:27.960 ], 00:15:27.960 "product_name": "Malloc disk", 00:15:27.960 "block_size": 512, 00:15:27.960 "num_blocks": 65536, 00:15:27.960 "uuid": "f35601ae-0f44-4cce-8b7f-42c588b851a8", 00:15:27.960 "assigned_rate_limits": { 00:15:27.960 "rw_ios_per_sec": 0, 00:15:27.960 "rw_mbytes_per_sec": 0, 00:15:27.960 "r_mbytes_per_sec": 0, 00:15:27.960 "w_mbytes_per_sec": 0 00:15:27.960 }, 00:15:27.960 "claimed": true, 00:15:27.960 "claim_type": "exclusive_write", 00:15:27.960 "zoned": false, 00:15:27.960 "supported_io_types": { 00:15:27.960 "read": true, 00:15:27.960 "write": true, 00:15:27.960 "unmap": true, 00:15:27.960 "flush": true, 00:15:27.960 "reset": true, 00:15:27.960 "nvme_admin": false, 00:15:27.960 "nvme_io": false, 00:15:27.960 "nvme_io_md": false, 00:15:27.960 "write_zeroes": true, 00:15:27.960 "zcopy": true, 00:15:27.960 "get_zone_info": false, 00:15:27.960 "zone_management": false, 00:15:27.960 "zone_append": false, 00:15:27.960 "compare": false, 00:15:27.960 "compare_and_write": false, 00:15:27.960 "abort": true, 00:15:27.960 "seek_hole": false, 00:15:27.960 "seek_data": false, 00:15:27.960 
"copy": true, 00:15:27.960 "nvme_iov_md": false 00:15:27.960 }, 00:15:27.960 "memory_domains": [ 00:15:27.960 { 00:15:27.960 "dma_device_id": "system", 00:15:27.960 "dma_device_type": 1 00:15:27.960 }, 00:15:27.960 { 00:15:27.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.960 "dma_device_type": 2 00:15:27.960 } 00:15:27.960 ], 00:15:27.960 "driver_specific": {} 00:15:27.960 } 00:15:27.960 ] 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.960 09:15:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.960 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.218 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.218 "name": "Existed_Raid", 00:15:28.218 "uuid": "9f2998de-73ab-4e7f-959f-fe25bf1519ef", 00:15:28.218 "strip_size_kb": 0, 00:15:28.218 "state": "online", 00:15:28.218 "raid_level": "raid1", 00:15:28.218 "superblock": false, 00:15:28.218 "num_base_bdevs": 3, 00:15:28.218 "num_base_bdevs_discovered": 3, 00:15:28.218 "num_base_bdevs_operational": 3, 00:15:28.218 "base_bdevs_list": [ 00:15:28.218 { 00:15:28.218 "name": "BaseBdev1", 00:15:28.218 "uuid": "39379eb7-ef39-4acb-b9ef-e6a16cd07cc7", 00:15:28.218 "is_configured": true, 00:15:28.218 "data_offset": 0, 00:15:28.218 "data_size": 65536 00:15:28.218 }, 00:15:28.218 { 00:15:28.218 "name": "BaseBdev2", 00:15:28.218 "uuid": "6b15fce1-b281-45be-93ee-d111a265a861", 00:15:28.218 "is_configured": true, 00:15:28.218 "data_offset": 0, 00:15:28.218 "data_size": 65536 00:15:28.218 }, 00:15:28.218 { 00:15:28.218 "name": "BaseBdev3", 00:15:28.218 "uuid": "f35601ae-0f44-4cce-8b7f-42c588b851a8", 00:15:28.218 "is_configured": true, 00:15:28.218 "data_offset": 0, 00:15:28.218 "data_size": 65536 00:15:28.218 } 00:15:28.218 ] 00:15:28.218 }' 00:15:28.218 09:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.218 09:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.476 09:15:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:28.476 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:28.476 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:28.476 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:28.476 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:28.476 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:28.476 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:28.476 09:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.476 09:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.476 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:28.476 [2024-10-15 09:15:12.323561] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.476 09:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.476 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:28.476 "name": "Existed_Raid", 00:15:28.476 "aliases": [ 00:15:28.476 "9f2998de-73ab-4e7f-959f-fe25bf1519ef" 00:15:28.476 ], 00:15:28.476 "product_name": "Raid Volume", 00:15:28.476 "block_size": 512, 00:15:28.476 "num_blocks": 65536, 00:15:28.476 "uuid": "9f2998de-73ab-4e7f-959f-fe25bf1519ef", 00:15:28.476 "assigned_rate_limits": { 00:15:28.476 "rw_ios_per_sec": 0, 00:15:28.476 "rw_mbytes_per_sec": 0, 00:15:28.476 "r_mbytes_per_sec": 0, 00:15:28.476 "w_mbytes_per_sec": 0 00:15:28.476 }, 00:15:28.476 "claimed": false, 00:15:28.476 "zoned": false, 
00:15:28.476 "supported_io_types": { 00:15:28.476 "read": true, 00:15:28.476 "write": true, 00:15:28.476 "unmap": false, 00:15:28.476 "flush": false, 00:15:28.476 "reset": true, 00:15:28.476 "nvme_admin": false, 00:15:28.476 "nvme_io": false, 00:15:28.476 "nvme_io_md": false, 00:15:28.476 "write_zeroes": true, 00:15:28.476 "zcopy": false, 00:15:28.476 "get_zone_info": false, 00:15:28.476 "zone_management": false, 00:15:28.476 "zone_append": false, 00:15:28.476 "compare": false, 00:15:28.476 "compare_and_write": false, 00:15:28.476 "abort": false, 00:15:28.476 "seek_hole": false, 00:15:28.476 "seek_data": false, 00:15:28.476 "copy": false, 00:15:28.476 "nvme_iov_md": false 00:15:28.476 }, 00:15:28.476 "memory_domains": [ 00:15:28.476 { 00:15:28.476 "dma_device_id": "system", 00:15:28.476 "dma_device_type": 1 00:15:28.476 }, 00:15:28.476 { 00:15:28.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.476 "dma_device_type": 2 00:15:28.476 }, 00:15:28.476 { 00:15:28.476 "dma_device_id": "system", 00:15:28.476 "dma_device_type": 1 00:15:28.476 }, 00:15:28.476 { 00:15:28.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.476 "dma_device_type": 2 00:15:28.476 }, 00:15:28.476 { 00:15:28.476 "dma_device_id": "system", 00:15:28.476 "dma_device_type": 1 00:15:28.476 }, 00:15:28.476 { 00:15:28.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.476 "dma_device_type": 2 00:15:28.476 } 00:15:28.476 ], 00:15:28.476 "driver_specific": { 00:15:28.476 "raid": { 00:15:28.476 "uuid": "9f2998de-73ab-4e7f-959f-fe25bf1519ef", 00:15:28.476 "strip_size_kb": 0, 00:15:28.476 "state": "online", 00:15:28.476 "raid_level": "raid1", 00:15:28.476 "superblock": false, 00:15:28.476 "num_base_bdevs": 3, 00:15:28.476 "num_base_bdevs_discovered": 3, 00:15:28.476 "num_base_bdevs_operational": 3, 00:15:28.476 "base_bdevs_list": [ 00:15:28.476 { 00:15:28.476 "name": "BaseBdev1", 00:15:28.476 "uuid": "39379eb7-ef39-4acb-b9ef-e6a16cd07cc7", 00:15:28.476 "is_configured": true, 00:15:28.476 
"data_offset": 0, 00:15:28.476 "data_size": 65536 00:15:28.476 }, 00:15:28.476 { 00:15:28.476 "name": "BaseBdev2", 00:15:28.476 "uuid": "6b15fce1-b281-45be-93ee-d111a265a861", 00:15:28.476 "is_configured": true, 00:15:28.476 "data_offset": 0, 00:15:28.476 "data_size": 65536 00:15:28.476 }, 00:15:28.476 { 00:15:28.476 "name": "BaseBdev3", 00:15:28.476 "uuid": "f35601ae-0f44-4cce-8b7f-42c588b851a8", 00:15:28.476 "is_configured": true, 00:15:28.476 "data_offset": 0, 00:15:28.476 "data_size": 65536 00:15:28.476 } 00:15:28.476 ] 00:15:28.476 } 00:15:28.476 } 00:15:28.476 }' 00:15:28.476 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:28.735 BaseBdev2 00:15:28.735 BaseBdev3' 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.735 09:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.735 [2024-10-15 09:15:12.631343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.993 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.993 "name": "Existed_Raid", 00:15:28.993 "uuid": "9f2998de-73ab-4e7f-959f-fe25bf1519ef", 00:15:28.993 "strip_size_kb": 0, 00:15:28.994 "state": "online", 00:15:28.994 "raid_level": "raid1", 00:15:28.994 "superblock": false, 00:15:28.994 "num_base_bdevs": 3, 00:15:28.994 "num_base_bdevs_discovered": 2, 00:15:28.994 "num_base_bdevs_operational": 2, 00:15:28.994 "base_bdevs_list": [ 00:15:28.994 { 00:15:28.994 "name": null, 00:15:28.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.994 "is_configured": false, 00:15:28.994 "data_offset": 0, 00:15:28.994 "data_size": 65536 00:15:28.994 }, 00:15:28.994 { 00:15:28.994 "name": "BaseBdev2", 00:15:28.994 "uuid": "6b15fce1-b281-45be-93ee-d111a265a861", 00:15:28.994 "is_configured": true, 00:15:28.994 "data_offset": 0, 00:15:28.994 "data_size": 65536 00:15:28.994 }, 00:15:28.994 { 00:15:28.994 "name": "BaseBdev3", 00:15:28.994 "uuid": "f35601ae-0f44-4cce-8b7f-42c588b851a8", 00:15:28.994 "is_configured": true, 00:15:28.994 "data_offset": 0, 00:15:28.994 "data_size": 65536 00:15:28.994 } 00:15:28.994 ] 
00:15:28.994 }' 00:15:28.994 09:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.994 09:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.560 [2024-10-15 09:15:13.284938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:29.560 09:15:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.560 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.560 [2024-10-15 09:15:13.438311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:29.560 [2024-10-15 09:15:13.438508] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.820 [2024-10-15 09:15:13.527986] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.820 [2024-10-15 09:15:13.528062] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.820 [2024-10-15 09:15:13.528084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:29.820 09:15:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.820 BaseBdev2 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:29.820 
09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.820 [ 00:15:29.820 { 00:15:29.820 "name": "BaseBdev2", 00:15:29.820 "aliases": [ 00:15:29.820 "09b2d607-7f87-4698-b0a2-04897efc5427" 00:15:29.820 ], 00:15:29.820 "product_name": "Malloc disk", 00:15:29.820 "block_size": 512, 00:15:29.820 "num_blocks": 65536, 00:15:29.820 "uuid": "09b2d607-7f87-4698-b0a2-04897efc5427", 00:15:29.820 "assigned_rate_limits": { 00:15:29.820 "rw_ios_per_sec": 0, 00:15:29.820 "rw_mbytes_per_sec": 0, 00:15:29.820 "r_mbytes_per_sec": 0, 00:15:29.820 "w_mbytes_per_sec": 0 00:15:29.820 }, 00:15:29.820 "claimed": false, 00:15:29.820 "zoned": false, 00:15:29.820 "supported_io_types": { 00:15:29.820 "read": true, 00:15:29.820 "write": true, 00:15:29.820 "unmap": true, 00:15:29.820 "flush": true, 00:15:29.820 "reset": true, 00:15:29.820 "nvme_admin": false, 00:15:29.820 "nvme_io": false, 00:15:29.820 "nvme_io_md": false, 00:15:29.820 "write_zeroes": true, 
00:15:29.820 "zcopy": true, 00:15:29.820 "get_zone_info": false, 00:15:29.820 "zone_management": false, 00:15:29.820 "zone_append": false, 00:15:29.820 "compare": false, 00:15:29.820 "compare_and_write": false, 00:15:29.820 "abort": true, 00:15:29.820 "seek_hole": false, 00:15:29.820 "seek_data": false, 00:15:29.820 "copy": true, 00:15:29.820 "nvme_iov_md": false 00:15:29.820 }, 00:15:29.820 "memory_domains": [ 00:15:29.820 { 00:15:29.820 "dma_device_id": "system", 00:15:29.820 "dma_device_type": 1 00:15:29.820 }, 00:15:29.820 { 00:15:29.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.820 "dma_device_type": 2 00:15:29.820 } 00:15:29.820 ], 00:15:29.820 "driver_specific": {} 00:15:29.820 } 00:15:29.820 ] 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.820 BaseBdev3 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:29.820 09:15:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.820 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.079 [ 00:15:30.079 { 00:15:30.079 "name": "BaseBdev3", 00:15:30.079 "aliases": [ 00:15:30.079 "883115a4-008e-47cc-b2cb-7b653f589f19" 00:15:30.079 ], 00:15:30.079 "product_name": "Malloc disk", 00:15:30.079 "block_size": 512, 00:15:30.079 "num_blocks": 65536, 00:15:30.079 "uuid": "883115a4-008e-47cc-b2cb-7b653f589f19", 00:15:30.079 "assigned_rate_limits": { 00:15:30.079 "rw_ios_per_sec": 0, 00:15:30.079 "rw_mbytes_per_sec": 0, 00:15:30.079 "r_mbytes_per_sec": 0, 00:15:30.079 "w_mbytes_per_sec": 0 00:15:30.079 }, 00:15:30.079 "claimed": false, 00:15:30.079 "zoned": false, 00:15:30.079 "supported_io_types": { 00:15:30.079 "read": true, 00:15:30.079 "write": true, 00:15:30.079 "unmap": true, 00:15:30.079 "flush": true, 00:15:30.079 "reset": true, 00:15:30.079 "nvme_admin": false, 00:15:30.079 "nvme_io": false, 00:15:30.079 "nvme_io_md": false, 00:15:30.079 "write_zeroes": true, 
00:15:30.079 "zcopy": true, 00:15:30.079 "get_zone_info": false, 00:15:30.079 "zone_management": false, 00:15:30.079 "zone_append": false, 00:15:30.079 "compare": false, 00:15:30.079 "compare_and_write": false, 00:15:30.079 "abort": true, 00:15:30.079 "seek_hole": false, 00:15:30.079 "seek_data": false, 00:15:30.079 "copy": true, 00:15:30.079 "nvme_iov_md": false 00:15:30.079 }, 00:15:30.079 "memory_domains": [ 00:15:30.079 { 00:15:30.079 "dma_device_id": "system", 00:15:30.079 "dma_device_type": 1 00:15:30.079 }, 00:15:30.079 { 00:15:30.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.079 "dma_device_type": 2 00:15:30.079 } 00:15:30.079 ], 00:15:30.079 "driver_specific": {} 00:15:30.079 } 00:15:30.079 ] 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.079 [2024-10-15 09:15:13.784058] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:30.079 [2024-10-15 09:15:13.784149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:30.079 [2024-10-15 09:15:13.784197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:30.079 [2024-10-15 09:15:13.786854] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:15:30.079 "name": "Existed_Raid", 00:15:30.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.079 "strip_size_kb": 0, 00:15:30.079 "state": "configuring", 00:15:30.079 "raid_level": "raid1", 00:15:30.079 "superblock": false, 00:15:30.079 "num_base_bdevs": 3, 00:15:30.079 "num_base_bdevs_discovered": 2, 00:15:30.079 "num_base_bdevs_operational": 3, 00:15:30.079 "base_bdevs_list": [ 00:15:30.079 { 00:15:30.079 "name": "BaseBdev1", 00:15:30.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.079 "is_configured": false, 00:15:30.079 "data_offset": 0, 00:15:30.079 "data_size": 0 00:15:30.079 }, 00:15:30.079 { 00:15:30.079 "name": "BaseBdev2", 00:15:30.079 "uuid": "09b2d607-7f87-4698-b0a2-04897efc5427", 00:15:30.079 "is_configured": true, 00:15:30.079 "data_offset": 0, 00:15:30.079 "data_size": 65536 00:15:30.079 }, 00:15:30.079 { 00:15:30.079 "name": "BaseBdev3", 00:15:30.079 "uuid": "883115a4-008e-47cc-b2cb-7b653f589f19", 00:15:30.079 "is_configured": true, 00:15:30.079 "data_offset": 0, 00:15:30.079 "data_size": 65536 00:15:30.079 } 00:15:30.079 ] 00:15:30.079 }' 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.079 09:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.647 [2024-10-15 09:15:14.328278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.647 "name": "Existed_Raid", 00:15:30.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.647 "strip_size_kb": 0, 00:15:30.647 "state": "configuring", 00:15:30.647 "raid_level": "raid1", 00:15:30.647 "superblock": false, 00:15:30.647 "num_base_bdevs": 3, 
00:15:30.647 "num_base_bdevs_discovered": 1, 00:15:30.647 "num_base_bdevs_operational": 3, 00:15:30.647 "base_bdevs_list": [ 00:15:30.647 { 00:15:30.647 "name": "BaseBdev1", 00:15:30.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.647 "is_configured": false, 00:15:30.647 "data_offset": 0, 00:15:30.647 "data_size": 0 00:15:30.647 }, 00:15:30.647 { 00:15:30.647 "name": null, 00:15:30.647 "uuid": "09b2d607-7f87-4698-b0a2-04897efc5427", 00:15:30.647 "is_configured": false, 00:15:30.647 "data_offset": 0, 00:15:30.647 "data_size": 65536 00:15:30.647 }, 00:15:30.647 { 00:15:30.647 "name": "BaseBdev3", 00:15:30.647 "uuid": "883115a4-008e-47cc-b2cb-7b653f589f19", 00:15:30.647 "is_configured": true, 00:15:30.647 "data_offset": 0, 00:15:30.647 "data_size": 65536 00:15:30.647 } 00:15:30.647 ] 00:15:30.647 }' 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.647 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.261 09:15:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.261 [2024-10-15 09:15:14.938559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:31.261 BaseBdev1 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.261 [ 00:15:31.261 { 00:15:31.261 "name": "BaseBdev1", 00:15:31.261 "aliases": [ 00:15:31.261 "42445b7f-6fb8-4050-9e03-9b6e515d28d4" 00:15:31.261 ], 00:15:31.261 "product_name": "Malloc disk", 
00:15:31.261 "block_size": 512, 00:15:31.261 "num_blocks": 65536, 00:15:31.261 "uuid": "42445b7f-6fb8-4050-9e03-9b6e515d28d4", 00:15:31.261 "assigned_rate_limits": { 00:15:31.261 "rw_ios_per_sec": 0, 00:15:31.261 "rw_mbytes_per_sec": 0, 00:15:31.261 "r_mbytes_per_sec": 0, 00:15:31.261 "w_mbytes_per_sec": 0 00:15:31.261 }, 00:15:31.261 "claimed": true, 00:15:31.261 "claim_type": "exclusive_write", 00:15:31.261 "zoned": false, 00:15:31.261 "supported_io_types": { 00:15:31.261 "read": true, 00:15:31.261 "write": true, 00:15:31.261 "unmap": true, 00:15:31.261 "flush": true, 00:15:31.261 "reset": true, 00:15:31.261 "nvme_admin": false, 00:15:31.261 "nvme_io": false, 00:15:31.261 "nvme_io_md": false, 00:15:31.261 "write_zeroes": true, 00:15:31.261 "zcopy": true, 00:15:31.261 "get_zone_info": false, 00:15:31.261 "zone_management": false, 00:15:31.261 "zone_append": false, 00:15:31.261 "compare": false, 00:15:31.261 "compare_and_write": false, 00:15:31.261 "abort": true, 00:15:31.261 "seek_hole": false, 00:15:31.261 "seek_data": false, 00:15:31.261 "copy": true, 00:15:31.261 "nvme_iov_md": false 00:15:31.261 }, 00:15:31.261 "memory_domains": [ 00:15:31.261 { 00:15:31.261 "dma_device_id": "system", 00:15:31.261 "dma_device_type": 1 00:15:31.261 }, 00:15:31.261 { 00:15:31.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.261 "dma_device_type": 2 00:15:31.261 } 00:15:31.261 ], 00:15:31.261 "driver_specific": {} 00:15:31.261 } 00:15:31.261 ] 00:15:31.261 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.262 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:31.262 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:31.262 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.262 09:15:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.262 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.262 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.262 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.262 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.262 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.262 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.262 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.262 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.262 09:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.262 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.262 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.262 09:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.262 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.262 "name": "Existed_Raid", 00:15:31.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.262 "strip_size_kb": 0, 00:15:31.262 "state": "configuring", 00:15:31.262 "raid_level": "raid1", 00:15:31.262 "superblock": false, 00:15:31.262 "num_base_bdevs": 3, 00:15:31.262 "num_base_bdevs_discovered": 2, 00:15:31.262 "num_base_bdevs_operational": 3, 00:15:31.262 "base_bdevs_list": [ 00:15:31.262 { 00:15:31.262 "name": "BaseBdev1", 00:15:31.262 "uuid": 
"42445b7f-6fb8-4050-9e03-9b6e515d28d4", 00:15:31.262 "is_configured": true, 00:15:31.262 "data_offset": 0, 00:15:31.262 "data_size": 65536 00:15:31.262 }, 00:15:31.262 { 00:15:31.262 "name": null, 00:15:31.262 "uuid": "09b2d607-7f87-4698-b0a2-04897efc5427", 00:15:31.262 "is_configured": false, 00:15:31.262 "data_offset": 0, 00:15:31.262 "data_size": 65536 00:15:31.262 }, 00:15:31.262 { 00:15:31.262 "name": "BaseBdev3", 00:15:31.262 "uuid": "883115a4-008e-47cc-b2cb-7b653f589f19", 00:15:31.262 "is_configured": true, 00:15:31.262 "data_offset": 0, 00:15:31.262 "data_size": 65536 00:15:31.262 } 00:15:31.262 ] 00:15:31.262 }' 00:15:31.262 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.262 09:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.826 [2024-10-15 09:15:15.510794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:31.826 09:15:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.826 "name": "Existed_Raid", 00:15:31.826 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:31.826 "strip_size_kb": 0, 00:15:31.826 "state": "configuring", 00:15:31.826 "raid_level": "raid1", 00:15:31.826 "superblock": false, 00:15:31.826 "num_base_bdevs": 3, 00:15:31.826 "num_base_bdevs_discovered": 1, 00:15:31.826 "num_base_bdevs_operational": 3, 00:15:31.826 "base_bdevs_list": [ 00:15:31.826 { 00:15:31.826 "name": "BaseBdev1", 00:15:31.826 "uuid": "42445b7f-6fb8-4050-9e03-9b6e515d28d4", 00:15:31.826 "is_configured": true, 00:15:31.826 "data_offset": 0, 00:15:31.826 "data_size": 65536 00:15:31.826 }, 00:15:31.826 { 00:15:31.826 "name": null, 00:15:31.826 "uuid": "09b2d607-7f87-4698-b0a2-04897efc5427", 00:15:31.826 "is_configured": false, 00:15:31.826 "data_offset": 0, 00:15:31.826 "data_size": 65536 00:15:31.826 }, 00:15:31.826 { 00:15:31.826 "name": null, 00:15:31.826 "uuid": "883115a4-008e-47cc-b2cb-7b653f589f19", 00:15:31.826 "is_configured": false, 00:15:31.826 "data_offset": 0, 00:15:31.826 "data_size": 65536 00:15:31.826 } 00:15:31.826 ] 00:15:31.826 }' 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.826 09:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.084 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:32.084 09:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.084 09:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.084 09:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.342 [2024-10-15 09:15:16.067031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.342 "name": "Existed_Raid", 00:15:32.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.342 "strip_size_kb": 0, 00:15:32.342 "state": "configuring", 00:15:32.342 "raid_level": "raid1", 00:15:32.342 "superblock": false, 00:15:32.342 "num_base_bdevs": 3, 00:15:32.342 "num_base_bdevs_discovered": 2, 00:15:32.342 "num_base_bdevs_operational": 3, 00:15:32.342 "base_bdevs_list": [ 00:15:32.342 { 00:15:32.342 "name": "BaseBdev1", 00:15:32.342 "uuid": "42445b7f-6fb8-4050-9e03-9b6e515d28d4", 00:15:32.342 "is_configured": true, 00:15:32.342 "data_offset": 0, 00:15:32.342 "data_size": 65536 00:15:32.342 }, 00:15:32.342 { 00:15:32.342 "name": null, 00:15:32.342 "uuid": "09b2d607-7f87-4698-b0a2-04897efc5427", 00:15:32.342 "is_configured": false, 00:15:32.342 "data_offset": 0, 00:15:32.342 "data_size": 65536 00:15:32.342 }, 00:15:32.342 { 00:15:32.342 "name": "BaseBdev3", 00:15:32.342 "uuid": "883115a4-008e-47cc-b2cb-7b653f589f19", 00:15:32.342 "is_configured": true, 00:15:32.342 "data_offset": 0, 00:15:32.342 "data_size": 65536 00:15:32.342 } 00:15:32.342 ] 00:15:32.342 }' 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.342 09:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.908 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.908 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:32.908 09:15:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.908 09:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.908 09:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.908 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:32.908 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:32.908 09:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.908 09:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.908 [2024-10-15 09:15:16.639184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:32.908 09:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.909 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:32.909 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.909 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.909 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.909 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.909 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.909 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.909 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.909 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.909 09:15:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.909 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.909 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.909 09:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.909 09:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.909 09:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.909 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.909 "name": "Existed_Raid", 00:15:32.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.909 "strip_size_kb": 0, 00:15:32.909 "state": "configuring", 00:15:32.909 "raid_level": "raid1", 00:15:32.909 "superblock": false, 00:15:32.909 "num_base_bdevs": 3, 00:15:32.909 "num_base_bdevs_discovered": 1, 00:15:32.909 "num_base_bdevs_operational": 3, 00:15:32.909 "base_bdevs_list": [ 00:15:32.909 { 00:15:32.909 "name": null, 00:15:32.909 "uuid": "42445b7f-6fb8-4050-9e03-9b6e515d28d4", 00:15:32.909 "is_configured": false, 00:15:32.909 "data_offset": 0, 00:15:32.909 "data_size": 65536 00:15:32.909 }, 00:15:32.909 { 00:15:32.909 "name": null, 00:15:32.909 "uuid": "09b2d607-7f87-4698-b0a2-04897efc5427", 00:15:32.909 "is_configured": false, 00:15:32.909 "data_offset": 0, 00:15:32.909 "data_size": 65536 00:15:32.909 }, 00:15:32.909 { 00:15:32.909 "name": "BaseBdev3", 00:15:32.909 "uuid": "883115a4-008e-47cc-b2cb-7b653f589f19", 00:15:32.909 "is_configured": true, 00:15:32.909 "data_offset": 0, 00:15:32.909 "data_size": 65536 00:15:32.909 } 00:15:32.909 ] 00:15:32.909 }' 00:15:32.909 09:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.909 09:15:16 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.475 [2024-10-15 09:15:17.293264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.475 "name": "Existed_Raid", 00:15:33.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.475 "strip_size_kb": 0, 00:15:33.475 "state": "configuring", 00:15:33.475 "raid_level": "raid1", 00:15:33.475 "superblock": false, 00:15:33.475 "num_base_bdevs": 3, 00:15:33.475 "num_base_bdevs_discovered": 2, 00:15:33.475 "num_base_bdevs_operational": 3, 00:15:33.475 "base_bdevs_list": [ 00:15:33.475 { 00:15:33.475 "name": null, 00:15:33.475 "uuid": "42445b7f-6fb8-4050-9e03-9b6e515d28d4", 00:15:33.475 "is_configured": false, 00:15:33.475 "data_offset": 0, 00:15:33.475 "data_size": 65536 00:15:33.475 }, 00:15:33.475 { 00:15:33.475 "name": "BaseBdev2", 00:15:33.475 "uuid": "09b2d607-7f87-4698-b0a2-04897efc5427", 00:15:33.475 "is_configured": true, 00:15:33.475 "data_offset": 0, 00:15:33.475 "data_size": 65536 00:15:33.475 }, 00:15:33.475 { 
00:15:33.475 "name": "BaseBdev3", 00:15:33.475 "uuid": "883115a4-008e-47cc-b2cb-7b653f589f19", 00:15:33.475 "is_configured": true, 00:15:33.475 "data_offset": 0, 00:15:33.475 "data_size": 65536 00:15:33.475 } 00:15:33.475 ] 00:15:33.475 }' 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.475 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.041 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.041 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.041 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.041 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:34.041 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.041 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:34.041 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.041 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:34.041 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.041 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.041 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.041 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 42445b7f-6fb8-4050-9e03-9b6e515d28d4 00:15:34.041 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.041 09:15:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.041 [2024-10-15 09:15:17.954964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:34.041 [2024-10-15 09:15:17.955045] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:34.041 [2024-10-15 09:15:17.955059] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:34.041 [2024-10-15 09:15:17.955436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:34.042 [2024-10-15 09:15:17.955659] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:34.042 [2024-10-15 09:15:17.955683] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:34.042 [2024-10-15 09:15:17.956027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.042 NewBaseBdev 00:15:34.042 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.042 09:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:34.042 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:34.042 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:34.042 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:34.042 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:34.042 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:34.042 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:34.042 09:15:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.042 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.299 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.299 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:34.299 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.299 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.299 [ 00:15:34.299 { 00:15:34.299 "name": "NewBaseBdev", 00:15:34.299 "aliases": [ 00:15:34.299 "42445b7f-6fb8-4050-9e03-9b6e515d28d4" 00:15:34.299 ], 00:15:34.299 "product_name": "Malloc disk", 00:15:34.299 "block_size": 512, 00:15:34.299 "num_blocks": 65536, 00:15:34.299 "uuid": "42445b7f-6fb8-4050-9e03-9b6e515d28d4", 00:15:34.299 "assigned_rate_limits": { 00:15:34.299 "rw_ios_per_sec": 0, 00:15:34.299 "rw_mbytes_per_sec": 0, 00:15:34.299 "r_mbytes_per_sec": 0, 00:15:34.299 "w_mbytes_per_sec": 0 00:15:34.299 }, 00:15:34.299 "claimed": true, 00:15:34.299 "claim_type": "exclusive_write", 00:15:34.299 "zoned": false, 00:15:34.299 "supported_io_types": { 00:15:34.299 "read": true, 00:15:34.299 "write": true, 00:15:34.299 "unmap": true, 00:15:34.299 "flush": true, 00:15:34.299 "reset": true, 00:15:34.299 "nvme_admin": false, 00:15:34.299 "nvme_io": false, 00:15:34.299 "nvme_io_md": false, 00:15:34.299 "write_zeroes": true, 00:15:34.299 "zcopy": true, 00:15:34.299 "get_zone_info": false, 00:15:34.299 "zone_management": false, 00:15:34.299 "zone_append": false, 00:15:34.299 "compare": false, 00:15:34.299 "compare_and_write": false, 00:15:34.299 "abort": true, 00:15:34.299 "seek_hole": false, 00:15:34.299 "seek_data": false, 00:15:34.299 "copy": true, 00:15:34.299 "nvme_iov_md": false 00:15:34.299 }, 00:15:34.299 "memory_domains": [ 00:15:34.299 { 00:15:34.299 
"dma_device_id": "system", 00:15:34.299 "dma_device_type": 1 00:15:34.299 }, 00:15:34.299 { 00:15:34.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.299 "dma_device_type": 2 00:15:34.299 } 00:15:34.299 ], 00:15:34.299 "driver_specific": {} 00:15:34.299 } 00:15:34.299 ] 00:15:34.299 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.299 09:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:34.299 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:34.299 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.299 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.299 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.299 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.299 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.299 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.299 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.299 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.299 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.299 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.299 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.299 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:34.299 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.299 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.299 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.299 "name": "Existed_Raid", 00:15:34.299 "uuid": "d8c7a73c-2bf0-4d83-a272-345c79dac761", 00:15:34.299 "strip_size_kb": 0, 00:15:34.299 "state": "online", 00:15:34.299 "raid_level": "raid1", 00:15:34.299 "superblock": false, 00:15:34.299 "num_base_bdevs": 3, 00:15:34.299 "num_base_bdevs_discovered": 3, 00:15:34.299 "num_base_bdevs_operational": 3, 00:15:34.299 "base_bdevs_list": [ 00:15:34.299 { 00:15:34.299 "name": "NewBaseBdev", 00:15:34.299 "uuid": "42445b7f-6fb8-4050-9e03-9b6e515d28d4", 00:15:34.299 "is_configured": true, 00:15:34.299 "data_offset": 0, 00:15:34.299 "data_size": 65536 00:15:34.299 }, 00:15:34.299 { 00:15:34.299 "name": "BaseBdev2", 00:15:34.299 "uuid": "09b2d607-7f87-4698-b0a2-04897efc5427", 00:15:34.299 "is_configured": true, 00:15:34.299 "data_offset": 0, 00:15:34.299 "data_size": 65536 00:15:34.299 }, 00:15:34.299 { 00:15:34.299 "name": "BaseBdev3", 00:15:34.299 "uuid": "883115a4-008e-47cc-b2cb-7b653f589f19", 00:15:34.299 "is_configured": true, 00:15:34.299 "data_offset": 0, 00:15:34.299 "data_size": 65536 00:15:34.299 } 00:15:34.299 ] 00:15:34.299 }' 00:15:34.299 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.299 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.865 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:34.865 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:34.865 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:34.865 09:15:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:34.865 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:34.865 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:34.865 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:34.865 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.865 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:34.865 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.865 [2024-10-15 09:15:18.519612] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.865 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.865 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:34.865 "name": "Existed_Raid", 00:15:34.865 "aliases": [ 00:15:34.865 "d8c7a73c-2bf0-4d83-a272-345c79dac761" 00:15:34.865 ], 00:15:34.865 "product_name": "Raid Volume", 00:15:34.865 "block_size": 512, 00:15:34.865 "num_blocks": 65536, 00:15:34.865 "uuid": "d8c7a73c-2bf0-4d83-a272-345c79dac761", 00:15:34.865 "assigned_rate_limits": { 00:15:34.865 "rw_ios_per_sec": 0, 00:15:34.865 "rw_mbytes_per_sec": 0, 00:15:34.865 "r_mbytes_per_sec": 0, 00:15:34.865 "w_mbytes_per_sec": 0 00:15:34.865 }, 00:15:34.865 "claimed": false, 00:15:34.865 "zoned": false, 00:15:34.865 "supported_io_types": { 00:15:34.865 "read": true, 00:15:34.865 "write": true, 00:15:34.865 "unmap": false, 00:15:34.865 "flush": false, 00:15:34.865 "reset": true, 00:15:34.865 "nvme_admin": false, 00:15:34.865 "nvme_io": false, 00:15:34.865 "nvme_io_md": false, 00:15:34.865 "write_zeroes": true, 00:15:34.865 "zcopy": false, 00:15:34.865 
"get_zone_info": false, 00:15:34.865 "zone_management": false, 00:15:34.865 "zone_append": false, 00:15:34.865 "compare": false, 00:15:34.865 "compare_and_write": false, 00:15:34.865 "abort": false, 00:15:34.865 "seek_hole": false, 00:15:34.865 "seek_data": false, 00:15:34.865 "copy": false, 00:15:34.865 "nvme_iov_md": false 00:15:34.865 }, 00:15:34.865 "memory_domains": [ 00:15:34.865 { 00:15:34.865 "dma_device_id": "system", 00:15:34.865 "dma_device_type": 1 00:15:34.865 }, 00:15:34.865 { 00:15:34.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.865 "dma_device_type": 2 00:15:34.865 }, 00:15:34.865 { 00:15:34.865 "dma_device_id": "system", 00:15:34.865 "dma_device_type": 1 00:15:34.865 }, 00:15:34.865 { 00:15:34.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.865 "dma_device_type": 2 00:15:34.865 }, 00:15:34.865 { 00:15:34.865 "dma_device_id": "system", 00:15:34.865 "dma_device_type": 1 00:15:34.865 }, 00:15:34.865 { 00:15:34.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.865 "dma_device_type": 2 00:15:34.865 } 00:15:34.865 ], 00:15:34.865 "driver_specific": { 00:15:34.865 "raid": { 00:15:34.865 "uuid": "d8c7a73c-2bf0-4d83-a272-345c79dac761", 00:15:34.865 "strip_size_kb": 0, 00:15:34.865 "state": "online", 00:15:34.865 "raid_level": "raid1", 00:15:34.865 "superblock": false, 00:15:34.865 "num_base_bdevs": 3, 00:15:34.865 "num_base_bdevs_discovered": 3, 00:15:34.865 "num_base_bdevs_operational": 3, 00:15:34.865 "base_bdevs_list": [ 00:15:34.865 { 00:15:34.865 "name": "NewBaseBdev", 00:15:34.865 "uuid": "42445b7f-6fb8-4050-9e03-9b6e515d28d4", 00:15:34.865 "is_configured": true, 00:15:34.865 "data_offset": 0, 00:15:34.865 "data_size": 65536 00:15:34.865 }, 00:15:34.865 { 00:15:34.865 "name": "BaseBdev2", 00:15:34.865 "uuid": "09b2d607-7f87-4698-b0a2-04897efc5427", 00:15:34.865 "is_configured": true, 00:15:34.865 "data_offset": 0, 00:15:34.865 "data_size": 65536 00:15:34.865 }, 00:15:34.865 { 00:15:34.865 "name": "BaseBdev3", 00:15:34.865 "uuid": 
"883115a4-008e-47cc-b2cb-7b653f589f19", 00:15:34.865 "is_configured": true, 00:15:34.865 "data_offset": 0, 00:15:34.865 "data_size": 65536 00:15:34.865 } 00:15:34.865 ] 00:15:34.865 } 00:15:34.865 } 00:15:34.865 }' 00:15:34.865 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:34.865 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:34.865 BaseBdev2 00:15:34.865 BaseBdev3' 00:15:34.865 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.865 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:34.866 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.866 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.866 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:34.866 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.866 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.866 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.866 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.866 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.866 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.866 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:15:34.866 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.866 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.866 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.866 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.866 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.866 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.866 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.866 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:34.866 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.866 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.866 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.124 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.124 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:35.124 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:35.124 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:35.124 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.124 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.124 
[2024-10-15 09:15:18.839291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:35.124 [2024-10-15 09:15:18.839343] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.124 [2024-10-15 09:15:18.839463] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.124 [2024-10-15 09:15:18.839864] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.124 [2024-10-15 09:15:18.839883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:35.124 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.124 09:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67655 00:15:35.124 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 67655 ']' 00:15:35.124 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 67655 00:15:35.124 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:15:35.124 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:35.124 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67655 00:15:35.124 killing process with pid 67655 00:15:35.124 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:35.124 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:35.124 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67655' 00:15:35.124 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 67655 00:15:35.124 [2024-10-15 
09:15:18.878092] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:35.124 09:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 67655 00:15:35.383 [2024-10-15 09:15:19.170978] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:36.757 ************************************ 00:15:36.757 END TEST raid_state_function_test 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:36.757 00:15:36.757 real 0m11.833s 00:15:36.757 user 0m19.398s 00:15:36.757 sys 0m1.653s 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.757 ************************************ 00:15:36.757 09:15:20 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:15:36.757 09:15:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:36.757 09:15:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:36.757 09:15:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:36.757 ************************************ 00:15:36.757 START TEST raid_state_function_test_sb 00:15:36.757 ************************************ 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:36.757 09:15:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:36.757 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:36.758 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:36.758 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:36.758 
09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:36.758 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:36.758 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:36.758 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68294 00:15:36.758 Process raid pid: 68294 00:15:36.758 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68294' 00:15:36.758 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:36.758 09:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68294 00:15:36.758 09:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 68294 ']' 00:15:36.758 09:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.758 09:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:36.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.758 09:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.758 09:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:36.758 09:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.758 [2024-10-15 09:15:20.458387] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:15:36.758 [2024-10-15 09:15:20.458560] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.758 [2024-10-15 09:15:20.628833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.017 [2024-10-15 09:15:20.779095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.275 [2024-10-15 09:15:21.009764] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.275 [2024-10-15 09:15:21.009828] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.842 [2024-10-15 09:15:21.521083] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:37.842 [2024-10-15 09:15:21.521200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:37.842 [2024-10-15 09:15:21.521218] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:37.842 [2024-10-15 09:15:21.521235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:37.842 [2024-10-15 09:15:21.521246] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:37.842 [2024-10-15 09:15:21.521262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.842 "name": "Existed_Raid", 00:15:37.842 "uuid": "a0f73196-f6f4-4137-9559-000d8d247de6", 00:15:37.842 "strip_size_kb": 0, 00:15:37.842 "state": "configuring", 00:15:37.842 "raid_level": "raid1", 00:15:37.842 "superblock": true, 00:15:37.842 "num_base_bdevs": 3, 00:15:37.842 "num_base_bdevs_discovered": 0, 00:15:37.842 "num_base_bdevs_operational": 3, 00:15:37.842 "base_bdevs_list": [ 00:15:37.842 { 00:15:37.842 "name": "BaseBdev1", 00:15:37.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.842 "is_configured": false, 00:15:37.842 "data_offset": 0, 00:15:37.842 "data_size": 0 00:15:37.842 }, 00:15:37.842 { 00:15:37.842 "name": "BaseBdev2", 00:15:37.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.842 "is_configured": false, 00:15:37.842 "data_offset": 0, 00:15:37.842 "data_size": 0 00:15:37.842 }, 00:15:37.842 { 00:15:37.842 "name": "BaseBdev3", 00:15:37.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.842 "is_configured": false, 00:15:37.842 "data_offset": 0, 00:15:37.842 "data_size": 0 00:15:37.842 } 00:15:37.842 ] 00:15:37.842 }' 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.842 09:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.100 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:38.101 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.101 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.101 [2024-10-15 09:15:22.009145] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:38.101 [2024-10-15 09:15:22.009217] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:38.101 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.101 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:38.101 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.101 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.101 [2024-10-15 09:15:22.021256] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:38.101 [2024-10-15 09:15:22.021324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:38.101 [2024-10-15 09:15:22.021340] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:38.101 [2024-10-15 09:15:22.021357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:38.101 [2024-10-15 09:15:22.021367] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:38.101 [2024-10-15 09:15:22.021382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:38.101 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.101 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:38.101 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.101 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.359 [2024-10-15 09:15:22.072773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:38.359 BaseBdev1 
00:15:38.359 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.359 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:38.359 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:38.359 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:38.359 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:38.359 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:38.359 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:38.359 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:38.359 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.359 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.359 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.359 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:38.359 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.359 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.359 [ 00:15:38.359 { 00:15:38.359 "name": "BaseBdev1", 00:15:38.359 "aliases": [ 00:15:38.359 "c3be3efa-c6bc-4b34-8480-134dbda57704" 00:15:38.359 ], 00:15:38.359 "product_name": "Malloc disk", 00:15:38.359 "block_size": 512, 00:15:38.359 "num_blocks": 65536, 00:15:38.359 "uuid": "c3be3efa-c6bc-4b34-8480-134dbda57704", 00:15:38.359 "assigned_rate_limits": { 00:15:38.359 
"rw_ios_per_sec": 0, 00:15:38.359 "rw_mbytes_per_sec": 0, 00:15:38.359 "r_mbytes_per_sec": 0, 00:15:38.359 "w_mbytes_per_sec": 0 00:15:38.359 }, 00:15:38.359 "claimed": true, 00:15:38.359 "claim_type": "exclusive_write", 00:15:38.360 "zoned": false, 00:15:38.360 "supported_io_types": { 00:15:38.360 "read": true, 00:15:38.360 "write": true, 00:15:38.360 "unmap": true, 00:15:38.360 "flush": true, 00:15:38.360 "reset": true, 00:15:38.360 "nvme_admin": false, 00:15:38.360 "nvme_io": false, 00:15:38.360 "nvme_io_md": false, 00:15:38.360 "write_zeroes": true, 00:15:38.360 "zcopy": true, 00:15:38.360 "get_zone_info": false, 00:15:38.360 "zone_management": false, 00:15:38.360 "zone_append": false, 00:15:38.360 "compare": false, 00:15:38.360 "compare_and_write": false, 00:15:38.360 "abort": true, 00:15:38.360 "seek_hole": false, 00:15:38.360 "seek_data": false, 00:15:38.360 "copy": true, 00:15:38.360 "nvme_iov_md": false 00:15:38.360 }, 00:15:38.360 "memory_domains": [ 00:15:38.360 { 00:15:38.360 "dma_device_id": "system", 00:15:38.360 "dma_device_type": 1 00:15:38.360 }, 00:15:38.360 { 00:15:38.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.360 "dma_device_type": 2 00:15:38.360 } 00:15:38.360 ], 00:15:38.360 "driver_specific": {} 00:15:38.360 } 00:15:38.360 ] 00:15:38.360 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.360 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:38.360 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:38.360 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.360 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.360 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:15:38.360 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.360 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.360 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.360 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.360 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.360 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.360 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.360 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.360 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.360 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.360 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.360 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.360 "name": "Existed_Raid", 00:15:38.360 "uuid": "bde0b204-6d4d-4ef4-be63-0847c1026b7d", 00:15:38.360 "strip_size_kb": 0, 00:15:38.360 "state": "configuring", 00:15:38.360 "raid_level": "raid1", 00:15:38.360 "superblock": true, 00:15:38.360 "num_base_bdevs": 3, 00:15:38.360 "num_base_bdevs_discovered": 1, 00:15:38.360 "num_base_bdevs_operational": 3, 00:15:38.360 "base_bdevs_list": [ 00:15:38.360 { 00:15:38.360 "name": "BaseBdev1", 00:15:38.360 "uuid": "c3be3efa-c6bc-4b34-8480-134dbda57704", 00:15:38.360 "is_configured": true, 00:15:38.360 "data_offset": 2048, 00:15:38.360 "data_size": 63488 
00:15:38.360 }, 00:15:38.360 { 00:15:38.360 "name": "BaseBdev2", 00:15:38.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.360 "is_configured": false, 00:15:38.360 "data_offset": 0, 00:15:38.360 "data_size": 0 00:15:38.360 }, 00:15:38.360 { 00:15:38.360 "name": "BaseBdev3", 00:15:38.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.360 "is_configured": false, 00:15:38.360 "data_offset": 0, 00:15:38.360 "data_size": 0 00:15:38.360 } 00:15:38.360 ] 00:15:38.360 }' 00:15:38.360 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.360 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.928 [2024-10-15 09:15:22.633087] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:38.928 [2024-10-15 09:15:22.633193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.928 [2024-10-15 09:15:22.645154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:38.928 [2024-10-15 09:15:22.648030] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:38.928 [2024-10-15 09:15:22.648087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:38.928 [2024-10-15 09:15:22.648103] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:38.928 [2024-10-15 09:15:22.648133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.928 "name": "Existed_Raid", 00:15:38.928 "uuid": "c208bdeb-ab52-47c4-b72e-f1eec191c659", 00:15:38.928 "strip_size_kb": 0, 00:15:38.928 "state": "configuring", 00:15:38.928 "raid_level": "raid1", 00:15:38.928 "superblock": true, 00:15:38.928 "num_base_bdevs": 3, 00:15:38.928 "num_base_bdevs_discovered": 1, 00:15:38.928 "num_base_bdevs_operational": 3, 00:15:38.928 "base_bdevs_list": [ 00:15:38.928 { 00:15:38.928 "name": "BaseBdev1", 00:15:38.928 "uuid": "c3be3efa-c6bc-4b34-8480-134dbda57704", 00:15:38.928 "is_configured": true, 00:15:38.928 "data_offset": 2048, 00:15:38.928 "data_size": 63488 00:15:38.928 }, 00:15:38.928 { 00:15:38.928 "name": "BaseBdev2", 00:15:38.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.928 "is_configured": false, 00:15:38.928 "data_offset": 0, 00:15:38.928 "data_size": 0 00:15:38.928 }, 00:15:38.928 { 00:15:38.928 "name": "BaseBdev3", 00:15:38.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.928 "is_configured": false, 00:15:38.928 "data_offset": 0, 00:15:38.928 "data_size": 0 00:15:38.928 } 00:15:38.928 ] 00:15:38.928 }' 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.928 09:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.496 [2024-10-15 09:15:23.187082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:39.496 BaseBdev2 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.496 [ 00:15:39.496 { 00:15:39.496 "name": "BaseBdev2", 00:15:39.496 "aliases": [ 00:15:39.496 "981a5901-f5b6-488c-a875-239185823f7c" 00:15:39.496 ], 00:15:39.496 "product_name": "Malloc disk", 00:15:39.496 "block_size": 512, 00:15:39.496 "num_blocks": 65536, 00:15:39.496 "uuid": "981a5901-f5b6-488c-a875-239185823f7c", 00:15:39.496 "assigned_rate_limits": { 00:15:39.496 "rw_ios_per_sec": 0, 00:15:39.496 "rw_mbytes_per_sec": 0, 00:15:39.496 "r_mbytes_per_sec": 0, 00:15:39.496 "w_mbytes_per_sec": 0 00:15:39.496 }, 00:15:39.496 "claimed": true, 00:15:39.496 "claim_type": "exclusive_write", 00:15:39.496 "zoned": false, 00:15:39.496 "supported_io_types": { 00:15:39.496 "read": true, 00:15:39.496 "write": true, 00:15:39.496 "unmap": true, 00:15:39.496 "flush": true, 00:15:39.496 "reset": true, 00:15:39.496 "nvme_admin": false, 00:15:39.496 "nvme_io": false, 00:15:39.496 "nvme_io_md": false, 00:15:39.496 "write_zeroes": true, 00:15:39.496 "zcopy": true, 00:15:39.496 "get_zone_info": false, 00:15:39.496 "zone_management": false, 00:15:39.496 "zone_append": false, 00:15:39.496 "compare": false, 00:15:39.496 "compare_and_write": false, 00:15:39.496 "abort": true, 00:15:39.496 "seek_hole": false, 00:15:39.496 "seek_data": false, 00:15:39.496 "copy": true, 00:15:39.496 "nvme_iov_md": false 00:15:39.496 }, 00:15:39.496 "memory_domains": [ 00:15:39.496 { 00:15:39.496 "dma_device_id": "system", 00:15:39.496 "dma_device_type": 1 00:15:39.496 }, 00:15:39.496 { 00:15:39.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.496 "dma_device_type": 2 00:15:39.496 } 00:15:39.496 ], 00:15:39.496 "driver_specific": {} 00:15:39.496 } 00:15:39.496 ] 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.496 
09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.496 "name": "Existed_Raid", 00:15:39.496 "uuid": "c208bdeb-ab52-47c4-b72e-f1eec191c659", 00:15:39.496 "strip_size_kb": 0, 00:15:39.496 "state": "configuring", 00:15:39.496 "raid_level": "raid1", 00:15:39.496 "superblock": true, 00:15:39.496 "num_base_bdevs": 3, 00:15:39.496 "num_base_bdevs_discovered": 2, 00:15:39.496 "num_base_bdevs_operational": 3, 00:15:39.496 "base_bdevs_list": [ 00:15:39.496 { 00:15:39.496 "name": "BaseBdev1", 00:15:39.496 "uuid": "c3be3efa-c6bc-4b34-8480-134dbda57704", 00:15:39.496 "is_configured": true, 00:15:39.496 "data_offset": 2048, 00:15:39.496 "data_size": 63488 00:15:39.496 }, 00:15:39.496 { 00:15:39.496 "name": "BaseBdev2", 00:15:39.496 "uuid": "981a5901-f5b6-488c-a875-239185823f7c", 00:15:39.496 "is_configured": true, 00:15:39.496 "data_offset": 2048, 00:15:39.496 "data_size": 63488 00:15:39.496 }, 00:15:39.496 { 00:15:39.496 "name": "BaseBdev3", 00:15:39.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.496 "is_configured": false, 00:15:39.496 "data_offset": 0, 00:15:39.496 "data_size": 0 00:15:39.496 } 00:15:39.496 ] 00:15:39.496 }' 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.496 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.063 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:40.063 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.063 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.063 [2024-10-15 09:15:23.814931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:40.063 [2024-10-15 09:15:23.815360] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:15:40.063 [2024-10-15 09:15:23.815392] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:40.063 [2024-10-15 09:15:23.815926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:40.063 BaseBdev3 00:15:40.063 [2024-10-15 09:15:23.816191] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:40.063 [2024-10-15 09:15:23.816210] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:40.063 [2024-10-15 09:15:23.816400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.063 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.063 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:40.063 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:40.063 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:40.063 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:40.063 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:40.063 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:40.063 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:40.063 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.063 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.063 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.063 09:15:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:40.063 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.063 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.063 [ 00:15:40.063 { 00:15:40.063 "name": "BaseBdev3", 00:15:40.063 "aliases": [ 00:15:40.063 "df3e5d7a-77d3-42f6-a68d-4bbf2caaac57" 00:15:40.063 ], 00:15:40.063 "product_name": "Malloc disk", 00:15:40.063 "block_size": 512, 00:15:40.063 "num_blocks": 65536, 00:15:40.063 "uuid": "df3e5d7a-77d3-42f6-a68d-4bbf2caaac57", 00:15:40.063 "assigned_rate_limits": { 00:15:40.063 "rw_ios_per_sec": 0, 00:15:40.063 "rw_mbytes_per_sec": 0, 00:15:40.063 "r_mbytes_per_sec": 0, 00:15:40.063 "w_mbytes_per_sec": 0 00:15:40.063 }, 00:15:40.063 "claimed": true, 00:15:40.063 "claim_type": "exclusive_write", 00:15:40.063 "zoned": false, 00:15:40.063 "supported_io_types": { 00:15:40.064 "read": true, 00:15:40.064 "write": true, 00:15:40.064 "unmap": true, 00:15:40.064 "flush": true, 00:15:40.064 "reset": true, 00:15:40.064 "nvme_admin": false, 00:15:40.064 "nvme_io": false, 00:15:40.064 "nvme_io_md": false, 00:15:40.064 "write_zeroes": true, 00:15:40.064 "zcopy": true, 00:15:40.064 "get_zone_info": false, 00:15:40.064 "zone_management": false, 00:15:40.064 "zone_append": false, 00:15:40.064 "compare": false, 00:15:40.064 "compare_and_write": false, 00:15:40.064 "abort": true, 00:15:40.064 "seek_hole": false, 00:15:40.064 "seek_data": false, 00:15:40.064 "copy": true, 00:15:40.064 "nvme_iov_md": false 00:15:40.064 }, 00:15:40.064 "memory_domains": [ 00:15:40.064 { 00:15:40.064 "dma_device_id": "system", 00:15:40.064 "dma_device_type": 1 00:15:40.064 }, 00:15:40.064 { 00:15:40.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.064 "dma_device_type": 2 00:15:40.064 } 00:15:40.064 ], 00:15:40.064 "driver_specific": {} 00:15:40.064 } 00:15:40.064 ] 
00:15:40.064 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.064 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:40.064 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:40.064 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:40.064 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:40.064 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.064 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.064 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:40.064 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:40.064 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.064 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.064 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.064 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.064 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.064 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.064 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.064 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.064 09:15:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.064 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.064 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.064 "name": "Existed_Raid", 00:15:40.064 "uuid": "c208bdeb-ab52-47c4-b72e-f1eec191c659", 00:15:40.064 "strip_size_kb": 0, 00:15:40.064 "state": "online", 00:15:40.064 "raid_level": "raid1", 00:15:40.064 "superblock": true, 00:15:40.064 "num_base_bdevs": 3, 00:15:40.064 "num_base_bdevs_discovered": 3, 00:15:40.064 "num_base_bdevs_operational": 3, 00:15:40.064 "base_bdevs_list": [ 00:15:40.064 { 00:15:40.064 "name": "BaseBdev1", 00:15:40.064 "uuid": "c3be3efa-c6bc-4b34-8480-134dbda57704", 00:15:40.064 "is_configured": true, 00:15:40.064 "data_offset": 2048, 00:15:40.064 "data_size": 63488 00:15:40.064 }, 00:15:40.064 { 00:15:40.064 "name": "BaseBdev2", 00:15:40.064 "uuid": "981a5901-f5b6-488c-a875-239185823f7c", 00:15:40.064 "is_configured": true, 00:15:40.064 "data_offset": 2048, 00:15:40.064 "data_size": 63488 00:15:40.064 }, 00:15:40.064 { 00:15:40.064 "name": "BaseBdev3", 00:15:40.064 "uuid": "df3e5d7a-77d3-42f6-a68d-4bbf2caaac57", 00:15:40.064 "is_configured": true, 00:15:40.064 "data_offset": 2048, 00:15:40.064 "data_size": 63488 00:15:40.064 } 00:15:40.064 ] 00:15:40.064 }' 00:15:40.064 09:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.064 09:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.632 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:40.632 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:40.632 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:15:40.632 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:40.632 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:40.632 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:40.632 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:40.632 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:40.632 09:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.632 09:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.632 [2024-10-15 09:15:24.399660] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:40.632 09:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.632 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:40.632 "name": "Existed_Raid", 00:15:40.632 "aliases": [ 00:15:40.632 "c208bdeb-ab52-47c4-b72e-f1eec191c659" 00:15:40.632 ], 00:15:40.632 "product_name": "Raid Volume", 00:15:40.632 "block_size": 512, 00:15:40.632 "num_blocks": 63488, 00:15:40.632 "uuid": "c208bdeb-ab52-47c4-b72e-f1eec191c659", 00:15:40.632 "assigned_rate_limits": { 00:15:40.632 "rw_ios_per_sec": 0, 00:15:40.632 "rw_mbytes_per_sec": 0, 00:15:40.632 "r_mbytes_per_sec": 0, 00:15:40.632 "w_mbytes_per_sec": 0 00:15:40.632 }, 00:15:40.632 "claimed": false, 00:15:40.632 "zoned": false, 00:15:40.632 "supported_io_types": { 00:15:40.632 "read": true, 00:15:40.632 "write": true, 00:15:40.632 "unmap": false, 00:15:40.632 "flush": false, 00:15:40.632 "reset": true, 00:15:40.632 "nvme_admin": false, 00:15:40.632 "nvme_io": false, 00:15:40.632 "nvme_io_md": false, 00:15:40.632 
"write_zeroes": true, 00:15:40.632 "zcopy": false, 00:15:40.632 "get_zone_info": false, 00:15:40.632 "zone_management": false, 00:15:40.632 "zone_append": false, 00:15:40.632 "compare": false, 00:15:40.632 "compare_and_write": false, 00:15:40.632 "abort": false, 00:15:40.632 "seek_hole": false, 00:15:40.632 "seek_data": false, 00:15:40.632 "copy": false, 00:15:40.632 "nvme_iov_md": false 00:15:40.632 }, 00:15:40.632 "memory_domains": [ 00:15:40.632 { 00:15:40.632 "dma_device_id": "system", 00:15:40.632 "dma_device_type": 1 00:15:40.632 }, 00:15:40.632 { 00:15:40.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.632 "dma_device_type": 2 00:15:40.632 }, 00:15:40.632 { 00:15:40.632 "dma_device_id": "system", 00:15:40.632 "dma_device_type": 1 00:15:40.632 }, 00:15:40.632 { 00:15:40.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.632 "dma_device_type": 2 00:15:40.632 }, 00:15:40.632 { 00:15:40.632 "dma_device_id": "system", 00:15:40.632 "dma_device_type": 1 00:15:40.632 }, 00:15:40.632 { 00:15:40.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.632 "dma_device_type": 2 00:15:40.632 } 00:15:40.632 ], 00:15:40.632 "driver_specific": { 00:15:40.632 "raid": { 00:15:40.632 "uuid": "c208bdeb-ab52-47c4-b72e-f1eec191c659", 00:15:40.632 "strip_size_kb": 0, 00:15:40.632 "state": "online", 00:15:40.632 "raid_level": "raid1", 00:15:40.632 "superblock": true, 00:15:40.632 "num_base_bdevs": 3, 00:15:40.632 "num_base_bdevs_discovered": 3, 00:15:40.632 "num_base_bdevs_operational": 3, 00:15:40.632 "base_bdevs_list": [ 00:15:40.632 { 00:15:40.632 "name": "BaseBdev1", 00:15:40.632 "uuid": "c3be3efa-c6bc-4b34-8480-134dbda57704", 00:15:40.632 "is_configured": true, 00:15:40.632 "data_offset": 2048, 00:15:40.632 "data_size": 63488 00:15:40.632 }, 00:15:40.632 { 00:15:40.632 "name": "BaseBdev2", 00:15:40.632 "uuid": "981a5901-f5b6-488c-a875-239185823f7c", 00:15:40.632 "is_configured": true, 00:15:40.632 "data_offset": 2048, 00:15:40.632 "data_size": 63488 00:15:40.632 }, 
00:15:40.632 { 00:15:40.632 "name": "BaseBdev3", 00:15:40.632 "uuid": "df3e5d7a-77d3-42f6-a68d-4bbf2caaac57", 00:15:40.632 "is_configured": true, 00:15:40.632 "data_offset": 2048, 00:15:40.632 "data_size": 63488 00:15:40.632 } 00:15:40.632 ] 00:15:40.632 } 00:15:40.632 } 00:15:40.632 }' 00:15:40.632 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:40.632 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:40.632 BaseBdev2 00:15:40.632 BaseBdev3' 00:15:40.632 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.632 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:40.632 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:40.632 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:40.632 09:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.632 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.632 09:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.890 09:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.890 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:40.890 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:40.890 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:40.890 
09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:40.890 09:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.890 09:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.890 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.890 09:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.890 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:40.890 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:40.890 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:40.890 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:40.890 09:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.890 09:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.890 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.890 09:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.890 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:40.890 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:40.890 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:40.891 09:15:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.891 09:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.891 [2024-10-15 09:15:24.715404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:40.891 09:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.891 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:40.891 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:40.891 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:40.891 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:40.891 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:40.891 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:40.891 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.891 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.891 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:40.891 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.149 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:41.149 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.149 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.149 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.149 
09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.149 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.149 09:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.149 09:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.149 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.149 09:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.149 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.149 "name": "Existed_Raid", 00:15:41.149 "uuid": "c208bdeb-ab52-47c4-b72e-f1eec191c659", 00:15:41.149 "strip_size_kb": 0, 00:15:41.149 "state": "online", 00:15:41.149 "raid_level": "raid1", 00:15:41.149 "superblock": true, 00:15:41.149 "num_base_bdevs": 3, 00:15:41.149 "num_base_bdevs_discovered": 2, 00:15:41.149 "num_base_bdevs_operational": 2, 00:15:41.149 "base_bdevs_list": [ 00:15:41.149 { 00:15:41.149 "name": null, 00:15:41.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.149 "is_configured": false, 00:15:41.149 "data_offset": 0, 00:15:41.149 "data_size": 63488 00:15:41.149 }, 00:15:41.149 { 00:15:41.149 "name": "BaseBdev2", 00:15:41.149 "uuid": "981a5901-f5b6-488c-a875-239185823f7c", 00:15:41.149 "is_configured": true, 00:15:41.149 "data_offset": 2048, 00:15:41.149 "data_size": 63488 00:15:41.149 }, 00:15:41.149 { 00:15:41.149 "name": "BaseBdev3", 00:15:41.149 "uuid": "df3e5d7a-77d3-42f6-a68d-4bbf2caaac57", 00:15:41.149 "is_configured": true, 00:15:41.149 "data_offset": 2048, 00:15:41.149 "data_size": 63488 00:15:41.149 } 00:15:41.149 ] 00:15:41.149 }' 00:15:41.149 09:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.149 
09:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.723 [2024-10-15 09:15:25.446710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.723 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.723 [2024-10-15 09:15:25.599990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:41.723 [2024-10-15 09:15:25.600209] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.982 [2024-10-15 09:15:25.698263] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.982 [2024-10-15 09:15:25.698341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.982 [2024-10-15 09:15:25.698363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.982 BaseBdev2 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.982 [ 00:15:41.982 { 00:15:41.982 "name": "BaseBdev2", 00:15:41.982 "aliases": [ 00:15:41.982 "16bf5d03-dc2d-4134-bf9b-c1cd170f58d0" 00:15:41.982 ], 00:15:41.982 "product_name": "Malloc disk", 00:15:41.982 "block_size": 512, 00:15:41.982 "num_blocks": 65536, 00:15:41.982 "uuid": "16bf5d03-dc2d-4134-bf9b-c1cd170f58d0", 00:15:41.982 "assigned_rate_limits": { 00:15:41.982 "rw_ios_per_sec": 0, 00:15:41.982 "rw_mbytes_per_sec": 0, 00:15:41.982 "r_mbytes_per_sec": 0, 00:15:41.982 "w_mbytes_per_sec": 0 00:15:41.982 }, 00:15:41.982 "claimed": false, 00:15:41.982 "zoned": false, 00:15:41.982 "supported_io_types": { 00:15:41.982 "read": true, 00:15:41.982 "write": true, 00:15:41.982 "unmap": true, 00:15:41.982 "flush": true, 00:15:41.982 "reset": true, 00:15:41.982 "nvme_admin": false, 00:15:41.982 "nvme_io": false, 00:15:41.982 
"nvme_io_md": false, 00:15:41.982 "write_zeroes": true, 00:15:41.982 "zcopy": true, 00:15:41.982 "get_zone_info": false, 00:15:41.982 "zone_management": false, 00:15:41.982 "zone_append": false, 00:15:41.982 "compare": false, 00:15:41.982 "compare_and_write": false, 00:15:41.982 "abort": true, 00:15:41.982 "seek_hole": false, 00:15:41.982 "seek_data": false, 00:15:41.982 "copy": true, 00:15:41.982 "nvme_iov_md": false 00:15:41.982 }, 00:15:41.982 "memory_domains": [ 00:15:41.982 { 00:15:41.982 "dma_device_id": "system", 00:15:41.982 "dma_device_type": 1 00:15:41.982 }, 00:15:41.982 { 00:15:41.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.982 "dma_device_type": 2 00:15:41.982 } 00:15:41.982 ], 00:15:41.982 "driver_specific": {} 00:15:41.982 } 00:15:41.982 ] 00:15:41.982 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.983 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:41.983 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:41.983 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:41.983 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:41.983 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.983 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.983 BaseBdev3 00:15:41.983 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.983 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:41.983 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:41.983 09:15:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:41.983 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:41.983 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:41.983 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:41.983 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:41.983 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.983 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.983 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.983 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:41.983 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.983 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.242 [ 00:15:42.242 { 00:15:42.242 "name": "BaseBdev3", 00:15:42.242 "aliases": [ 00:15:42.242 "9942b9fc-1a5c-4dad-a087-853240983666" 00:15:42.242 ], 00:15:42.242 "product_name": "Malloc disk", 00:15:42.242 "block_size": 512, 00:15:42.242 "num_blocks": 65536, 00:15:42.242 "uuid": "9942b9fc-1a5c-4dad-a087-853240983666", 00:15:42.242 "assigned_rate_limits": { 00:15:42.242 "rw_ios_per_sec": 0, 00:15:42.242 "rw_mbytes_per_sec": 0, 00:15:42.242 "r_mbytes_per_sec": 0, 00:15:42.242 "w_mbytes_per_sec": 0 00:15:42.242 }, 00:15:42.242 "claimed": false, 00:15:42.242 "zoned": false, 00:15:42.242 "supported_io_types": { 00:15:42.242 "read": true, 00:15:42.242 "write": true, 00:15:42.242 "unmap": true, 00:15:42.242 "flush": true, 00:15:42.242 "reset": true, 00:15:42.242 "nvme_admin": false, 
00:15:42.242 "nvme_io": false, 00:15:42.242 "nvme_io_md": false, 00:15:42.242 "write_zeroes": true, 00:15:42.242 "zcopy": true, 00:15:42.242 "get_zone_info": false, 00:15:42.242 "zone_management": false, 00:15:42.242 "zone_append": false, 00:15:42.242 "compare": false, 00:15:42.242 "compare_and_write": false, 00:15:42.242 "abort": true, 00:15:42.242 "seek_hole": false, 00:15:42.242 "seek_data": false, 00:15:42.242 "copy": true, 00:15:42.242 "nvme_iov_md": false 00:15:42.242 }, 00:15:42.242 "memory_domains": [ 00:15:42.242 { 00:15:42.242 "dma_device_id": "system", 00:15:42.242 "dma_device_type": 1 00:15:42.242 }, 00:15:42.242 { 00:15:42.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.242 "dma_device_type": 2 00:15:42.242 } 00:15:42.242 ], 00:15:42.242 "driver_specific": {} 00:15:42.242 } 00:15:42.242 ] 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.242 [2024-10-15 09:15:25.933001] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:42.242 [2024-10-15 09:15:25.933065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:42.242 [2024-10-15 09:15:25.933104] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:42.242 [2024-10-15 09:15:25.935861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.242 
09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.242 "name": "Existed_Raid", 00:15:42.242 "uuid": "0a6efb41-7bc3-416a-ba0b-d41e2f6d9216", 00:15:42.242 "strip_size_kb": 0, 00:15:42.242 "state": "configuring", 00:15:42.242 "raid_level": "raid1", 00:15:42.242 "superblock": true, 00:15:42.242 "num_base_bdevs": 3, 00:15:42.242 "num_base_bdevs_discovered": 2, 00:15:42.242 "num_base_bdevs_operational": 3, 00:15:42.242 "base_bdevs_list": [ 00:15:42.242 { 00:15:42.242 "name": "BaseBdev1", 00:15:42.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.242 "is_configured": false, 00:15:42.242 "data_offset": 0, 00:15:42.242 "data_size": 0 00:15:42.242 }, 00:15:42.242 { 00:15:42.242 "name": "BaseBdev2", 00:15:42.242 "uuid": "16bf5d03-dc2d-4134-bf9b-c1cd170f58d0", 00:15:42.242 "is_configured": true, 00:15:42.242 "data_offset": 2048, 00:15:42.242 "data_size": 63488 00:15:42.242 }, 00:15:42.242 { 00:15:42.242 "name": "BaseBdev3", 00:15:42.242 "uuid": "9942b9fc-1a5c-4dad-a087-853240983666", 00:15:42.242 "is_configured": true, 00:15:42.242 "data_offset": 2048, 00:15:42.242 "data_size": 63488 00:15:42.242 } 00:15:42.242 ] 00:15:42.242 }' 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.242 09:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.810 09:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:42.810 09:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.810 09:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.810 [2024-10-15 09:15:26.445098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:42.810 09:15:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.810 09:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:42.810 09:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.810 09:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.810 09:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.810 09:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:42.810 09:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.810 09:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.810 09:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.810 09:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.810 09:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.810 09:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.810 09:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.810 09:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.810 09:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.810 09:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.810 09:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.810 "name": 
"Existed_Raid", 00:15:42.810 "uuid": "0a6efb41-7bc3-416a-ba0b-d41e2f6d9216", 00:15:42.810 "strip_size_kb": 0, 00:15:42.810 "state": "configuring", 00:15:42.810 "raid_level": "raid1", 00:15:42.810 "superblock": true, 00:15:42.810 "num_base_bdevs": 3, 00:15:42.810 "num_base_bdevs_discovered": 1, 00:15:42.810 "num_base_bdevs_operational": 3, 00:15:42.810 "base_bdevs_list": [ 00:15:42.810 { 00:15:42.810 "name": "BaseBdev1", 00:15:42.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.810 "is_configured": false, 00:15:42.810 "data_offset": 0, 00:15:42.810 "data_size": 0 00:15:42.810 }, 00:15:42.810 { 00:15:42.810 "name": null, 00:15:42.810 "uuid": "16bf5d03-dc2d-4134-bf9b-c1cd170f58d0", 00:15:42.810 "is_configured": false, 00:15:42.810 "data_offset": 0, 00:15:42.810 "data_size": 63488 00:15:42.810 }, 00:15:42.810 { 00:15:42.810 "name": "BaseBdev3", 00:15:42.810 "uuid": "9942b9fc-1a5c-4dad-a087-853240983666", 00:15:42.810 "is_configured": true, 00:15:42.810 "data_offset": 2048, 00:15:42.810 "data_size": 63488 00:15:42.810 } 00:15:42.810 ] 00:15:42.810 }' 00:15:42.810 09:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.810 09:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.378 09:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:43.378 09:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:43.378 
09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.378 [2024-10-15 09:15:27.088933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:43.378 BaseBdev1 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.378 [ 00:15:43.378 { 00:15:43.378 "name": "BaseBdev1", 00:15:43.378 "aliases": [ 00:15:43.378 "2c53d5b2-234e-4555-a4c9-eae11b9011d9" 00:15:43.378 ], 00:15:43.378 "product_name": "Malloc disk", 00:15:43.378 "block_size": 512, 00:15:43.378 "num_blocks": 65536, 00:15:43.378 "uuid": "2c53d5b2-234e-4555-a4c9-eae11b9011d9", 00:15:43.378 "assigned_rate_limits": { 00:15:43.378 "rw_ios_per_sec": 0, 00:15:43.378 "rw_mbytes_per_sec": 0, 00:15:43.378 "r_mbytes_per_sec": 0, 00:15:43.378 "w_mbytes_per_sec": 0 00:15:43.378 }, 00:15:43.378 "claimed": true, 00:15:43.378 "claim_type": "exclusive_write", 00:15:43.378 "zoned": false, 00:15:43.378 "supported_io_types": { 00:15:43.378 "read": true, 00:15:43.378 "write": true, 00:15:43.378 "unmap": true, 00:15:43.378 "flush": true, 00:15:43.378 "reset": true, 00:15:43.378 "nvme_admin": false, 00:15:43.378 "nvme_io": false, 00:15:43.378 "nvme_io_md": false, 00:15:43.378 "write_zeroes": true, 00:15:43.378 "zcopy": true, 00:15:43.378 "get_zone_info": false, 00:15:43.378 "zone_management": false, 00:15:43.378 "zone_append": false, 00:15:43.378 "compare": false, 00:15:43.378 "compare_and_write": false, 00:15:43.378 "abort": true, 00:15:43.378 "seek_hole": false, 00:15:43.378 "seek_data": false, 00:15:43.378 "copy": true, 00:15:43.378 "nvme_iov_md": false 00:15:43.378 }, 00:15:43.378 "memory_domains": [ 00:15:43.378 { 00:15:43.378 "dma_device_id": "system", 00:15:43.378 "dma_device_type": 1 00:15:43.378 }, 00:15:43.378 { 00:15:43.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.378 "dma_device_type": 2 00:15:43.378 } 00:15:43.378 ], 00:15:43.378 "driver_specific": {} 00:15:43.378 } 00:15:43.378 ] 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:43.378 
09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.378 "name": "Existed_Raid", 00:15:43.378 "uuid": "0a6efb41-7bc3-416a-ba0b-d41e2f6d9216", 00:15:43.378 "strip_size_kb": 0, 
00:15:43.378 "state": "configuring", 00:15:43.378 "raid_level": "raid1", 00:15:43.378 "superblock": true, 00:15:43.378 "num_base_bdevs": 3, 00:15:43.378 "num_base_bdevs_discovered": 2, 00:15:43.378 "num_base_bdevs_operational": 3, 00:15:43.378 "base_bdevs_list": [ 00:15:43.378 { 00:15:43.378 "name": "BaseBdev1", 00:15:43.378 "uuid": "2c53d5b2-234e-4555-a4c9-eae11b9011d9", 00:15:43.378 "is_configured": true, 00:15:43.378 "data_offset": 2048, 00:15:43.378 "data_size": 63488 00:15:43.378 }, 00:15:43.378 { 00:15:43.378 "name": null, 00:15:43.378 "uuid": "16bf5d03-dc2d-4134-bf9b-c1cd170f58d0", 00:15:43.378 "is_configured": false, 00:15:43.378 "data_offset": 0, 00:15:43.378 "data_size": 63488 00:15:43.378 }, 00:15:43.378 { 00:15:43.378 "name": "BaseBdev3", 00:15:43.378 "uuid": "9942b9fc-1a5c-4dad-a087-853240983666", 00:15:43.378 "is_configured": true, 00:15:43.378 "data_offset": 2048, 00:15:43.378 "data_size": 63488 00:15:43.378 } 00:15:43.378 ] 00:15:43.378 }' 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.378 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.945 [2024-10-15 09:15:27.705186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.945 "name": "Existed_Raid", 00:15:43.945 "uuid": "0a6efb41-7bc3-416a-ba0b-d41e2f6d9216", 00:15:43.945 "strip_size_kb": 0, 00:15:43.945 "state": "configuring", 00:15:43.945 "raid_level": "raid1", 00:15:43.945 "superblock": true, 00:15:43.945 "num_base_bdevs": 3, 00:15:43.945 "num_base_bdevs_discovered": 1, 00:15:43.945 "num_base_bdevs_operational": 3, 00:15:43.945 "base_bdevs_list": [ 00:15:43.945 { 00:15:43.945 "name": "BaseBdev1", 00:15:43.945 "uuid": "2c53d5b2-234e-4555-a4c9-eae11b9011d9", 00:15:43.945 "is_configured": true, 00:15:43.945 "data_offset": 2048, 00:15:43.945 "data_size": 63488 00:15:43.945 }, 00:15:43.945 { 00:15:43.945 "name": null, 00:15:43.945 "uuid": "16bf5d03-dc2d-4134-bf9b-c1cd170f58d0", 00:15:43.945 "is_configured": false, 00:15:43.945 "data_offset": 0, 00:15:43.945 "data_size": 63488 00:15:43.945 }, 00:15:43.945 { 00:15:43.945 "name": null, 00:15:43.945 "uuid": "9942b9fc-1a5c-4dad-a087-853240983666", 00:15:43.945 "is_configured": false, 00:15:43.945 "data_offset": 0, 00:15:43.945 "data_size": 63488 00:15:43.945 } 00:15:43.945 ] 00:15:43.945 }' 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.945 09:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.520 09:15:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.520 [2024-10-15 09:15:28.309595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.520 "name": "Existed_Raid", 00:15:44.520 "uuid": "0a6efb41-7bc3-416a-ba0b-d41e2f6d9216", 00:15:44.520 "strip_size_kb": 0, 00:15:44.520 "state": "configuring", 00:15:44.520 "raid_level": "raid1", 00:15:44.520 "superblock": true, 00:15:44.520 "num_base_bdevs": 3, 00:15:44.520 "num_base_bdevs_discovered": 2, 00:15:44.520 "num_base_bdevs_operational": 3, 00:15:44.520 "base_bdevs_list": [ 00:15:44.520 { 00:15:44.520 "name": "BaseBdev1", 00:15:44.520 "uuid": "2c53d5b2-234e-4555-a4c9-eae11b9011d9", 00:15:44.520 "is_configured": true, 00:15:44.520 "data_offset": 2048, 00:15:44.520 "data_size": 63488 00:15:44.520 }, 00:15:44.520 { 00:15:44.520 "name": null, 00:15:44.520 "uuid": "16bf5d03-dc2d-4134-bf9b-c1cd170f58d0", 00:15:44.520 "is_configured": false, 00:15:44.520 "data_offset": 0, 00:15:44.520 "data_size": 63488 00:15:44.520 }, 00:15:44.520 { 00:15:44.520 "name": "BaseBdev3", 00:15:44.520 "uuid": "9942b9fc-1a5c-4dad-a087-853240983666", 00:15:44.520 "is_configured": true, 00:15:44.520 "data_offset": 2048, 00:15:44.520 "data_size": 63488 00:15:44.520 } 00:15:44.520 ] 00:15:44.520 }' 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.520 09:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.087 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.087 09:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.087 09:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.087 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:45.087 09:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.087 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:45.087 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:45.087 09:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.087 09:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.087 [2024-10-15 09:15:28.909917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:45.087 09:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.087 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:45.087 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.087 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.087 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.087 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:15:45.087 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.087 09:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.087 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.087 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.087 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.087 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.087 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.087 09:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.087 09:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.345 09:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.345 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.345 "name": "Existed_Raid", 00:15:45.345 "uuid": "0a6efb41-7bc3-416a-ba0b-d41e2f6d9216", 00:15:45.345 "strip_size_kb": 0, 00:15:45.345 "state": "configuring", 00:15:45.345 "raid_level": "raid1", 00:15:45.345 "superblock": true, 00:15:45.345 "num_base_bdevs": 3, 00:15:45.345 "num_base_bdevs_discovered": 1, 00:15:45.345 "num_base_bdevs_operational": 3, 00:15:45.345 "base_bdevs_list": [ 00:15:45.345 { 00:15:45.345 "name": null, 00:15:45.345 "uuid": "2c53d5b2-234e-4555-a4c9-eae11b9011d9", 00:15:45.345 "is_configured": false, 00:15:45.345 "data_offset": 0, 00:15:45.345 "data_size": 63488 00:15:45.345 }, 00:15:45.345 { 00:15:45.345 "name": null, 00:15:45.345 "uuid": 
"16bf5d03-dc2d-4134-bf9b-c1cd170f58d0", 00:15:45.345 "is_configured": false, 00:15:45.345 "data_offset": 0, 00:15:45.345 "data_size": 63488 00:15:45.345 }, 00:15:45.345 { 00:15:45.345 "name": "BaseBdev3", 00:15:45.345 "uuid": "9942b9fc-1a5c-4dad-a087-853240983666", 00:15:45.345 "is_configured": true, 00:15:45.345 "data_offset": 2048, 00:15:45.345 "data_size": 63488 00:15:45.345 } 00:15:45.345 ] 00:15:45.345 }' 00:15:45.345 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.345 09:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.604 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.604 09:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.604 09:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.604 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:45.604 09:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.862 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.863 [2024-10-15 09:15:29.570805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.863 "name": "Existed_Raid", 00:15:45.863 "uuid": "0a6efb41-7bc3-416a-ba0b-d41e2f6d9216", 00:15:45.863 "strip_size_kb": 0, 00:15:45.863 "state": "configuring", 00:15:45.863 
"raid_level": "raid1", 00:15:45.863 "superblock": true, 00:15:45.863 "num_base_bdevs": 3, 00:15:45.863 "num_base_bdevs_discovered": 2, 00:15:45.863 "num_base_bdevs_operational": 3, 00:15:45.863 "base_bdevs_list": [ 00:15:45.863 { 00:15:45.863 "name": null, 00:15:45.863 "uuid": "2c53d5b2-234e-4555-a4c9-eae11b9011d9", 00:15:45.863 "is_configured": false, 00:15:45.863 "data_offset": 0, 00:15:45.863 "data_size": 63488 00:15:45.863 }, 00:15:45.863 { 00:15:45.863 "name": "BaseBdev2", 00:15:45.863 "uuid": "16bf5d03-dc2d-4134-bf9b-c1cd170f58d0", 00:15:45.863 "is_configured": true, 00:15:45.863 "data_offset": 2048, 00:15:45.863 "data_size": 63488 00:15:45.863 }, 00:15:45.863 { 00:15:45.863 "name": "BaseBdev3", 00:15:45.863 "uuid": "9942b9fc-1a5c-4dad-a087-853240983666", 00:15:45.863 "is_configured": true, 00:15:45.863 "data_offset": 2048, 00:15:45.863 "data_size": 63488 00:15:45.863 } 00:15:45.863 ] 00:15:45.863 }' 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.863 09:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.432 09:15:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2c53d5b2-234e-4555-a4c9-eae11b9011d9 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.432 [2024-10-15 09:15:30.252678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:46.432 [2024-10-15 09:15:30.253046] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:46.432 [2024-10-15 09:15:30.253068] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:46.432 NewBaseBdev 00:15:46.432 [2024-10-15 09:15:30.253432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:46.432 [2024-10-15 09:15:30.253675] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:46.432 [2024-10-15 09:15:30.253699] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:46.432 [2024-10-15 09:15:30.253872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:46.432 
09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.432 [ 00:15:46.432 { 00:15:46.432 "name": "NewBaseBdev", 00:15:46.432 "aliases": [ 00:15:46.432 "2c53d5b2-234e-4555-a4c9-eae11b9011d9" 00:15:46.432 ], 00:15:46.432 "product_name": "Malloc disk", 00:15:46.432 "block_size": 512, 00:15:46.432 "num_blocks": 65536, 00:15:46.432 "uuid": "2c53d5b2-234e-4555-a4c9-eae11b9011d9", 00:15:46.432 "assigned_rate_limits": { 00:15:46.432 "rw_ios_per_sec": 0, 00:15:46.432 "rw_mbytes_per_sec": 0, 00:15:46.432 "r_mbytes_per_sec": 0, 00:15:46.432 "w_mbytes_per_sec": 0 00:15:46.432 }, 00:15:46.432 "claimed": true, 00:15:46.432 "claim_type": "exclusive_write", 00:15:46.432 
"zoned": false, 00:15:46.432 "supported_io_types": { 00:15:46.432 "read": true, 00:15:46.432 "write": true, 00:15:46.432 "unmap": true, 00:15:46.432 "flush": true, 00:15:46.432 "reset": true, 00:15:46.432 "nvme_admin": false, 00:15:46.432 "nvme_io": false, 00:15:46.432 "nvme_io_md": false, 00:15:46.432 "write_zeroes": true, 00:15:46.432 "zcopy": true, 00:15:46.432 "get_zone_info": false, 00:15:46.432 "zone_management": false, 00:15:46.432 "zone_append": false, 00:15:46.432 "compare": false, 00:15:46.432 "compare_and_write": false, 00:15:46.432 "abort": true, 00:15:46.432 "seek_hole": false, 00:15:46.432 "seek_data": false, 00:15:46.432 "copy": true, 00:15:46.432 "nvme_iov_md": false 00:15:46.432 }, 00:15:46.432 "memory_domains": [ 00:15:46.432 { 00:15:46.432 "dma_device_id": "system", 00:15:46.432 "dma_device_type": 1 00:15:46.432 }, 00:15:46.432 { 00:15:46.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.432 "dma_device_type": 2 00:15:46.432 } 00:15:46.432 ], 00:15:46.432 "driver_specific": {} 00:15:46.432 } 00:15:46.432 ] 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.432 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.432 "name": "Existed_Raid", 00:15:46.432 "uuid": "0a6efb41-7bc3-416a-ba0b-d41e2f6d9216", 00:15:46.432 "strip_size_kb": 0, 00:15:46.432 "state": "online", 00:15:46.432 "raid_level": "raid1", 00:15:46.432 "superblock": true, 00:15:46.432 "num_base_bdevs": 3, 00:15:46.432 "num_base_bdevs_discovered": 3, 00:15:46.432 "num_base_bdevs_operational": 3, 00:15:46.432 "base_bdevs_list": [ 00:15:46.432 { 00:15:46.432 "name": "NewBaseBdev", 00:15:46.432 "uuid": "2c53d5b2-234e-4555-a4c9-eae11b9011d9", 00:15:46.432 "is_configured": true, 00:15:46.432 "data_offset": 2048, 00:15:46.432 "data_size": 63488 00:15:46.432 }, 00:15:46.432 { 00:15:46.432 "name": "BaseBdev2", 00:15:46.432 "uuid": "16bf5d03-dc2d-4134-bf9b-c1cd170f58d0", 00:15:46.432 "is_configured": true, 00:15:46.432 "data_offset": 2048, 00:15:46.432 "data_size": 63488 00:15:46.432 }, 00:15:46.432 
{ 00:15:46.432 "name": "BaseBdev3", 00:15:46.432 "uuid": "9942b9fc-1a5c-4dad-a087-853240983666", 00:15:46.433 "is_configured": true, 00:15:46.433 "data_offset": 2048, 00:15:46.433 "data_size": 63488 00:15:46.433 } 00:15:46.433 ] 00:15:46.433 }' 00:15:46.433 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.433 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.000 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:47.000 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:47.000 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:47.000 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:47.000 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:47.000 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:47.000 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:47.000 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:47.000 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.000 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.000 [2024-10-15 09:15:30.833343] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.000 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.000 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:47.000 "name": "Existed_Raid", 00:15:47.000 
"aliases": [ 00:15:47.000 "0a6efb41-7bc3-416a-ba0b-d41e2f6d9216" 00:15:47.000 ], 00:15:47.000 "product_name": "Raid Volume", 00:15:47.000 "block_size": 512, 00:15:47.000 "num_blocks": 63488, 00:15:47.000 "uuid": "0a6efb41-7bc3-416a-ba0b-d41e2f6d9216", 00:15:47.000 "assigned_rate_limits": { 00:15:47.000 "rw_ios_per_sec": 0, 00:15:47.000 "rw_mbytes_per_sec": 0, 00:15:47.000 "r_mbytes_per_sec": 0, 00:15:47.000 "w_mbytes_per_sec": 0 00:15:47.000 }, 00:15:47.000 "claimed": false, 00:15:47.000 "zoned": false, 00:15:47.000 "supported_io_types": { 00:15:47.000 "read": true, 00:15:47.000 "write": true, 00:15:47.000 "unmap": false, 00:15:47.000 "flush": false, 00:15:47.000 "reset": true, 00:15:47.000 "nvme_admin": false, 00:15:47.000 "nvme_io": false, 00:15:47.000 "nvme_io_md": false, 00:15:47.000 "write_zeroes": true, 00:15:47.000 "zcopy": false, 00:15:47.000 "get_zone_info": false, 00:15:47.000 "zone_management": false, 00:15:47.000 "zone_append": false, 00:15:47.000 "compare": false, 00:15:47.000 "compare_and_write": false, 00:15:47.000 "abort": false, 00:15:47.000 "seek_hole": false, 00:15:47.000 "seek_data": false, 00:15:47.000 "copy": false, 00:15:47.000 "nvme_iov_md": false 00:15:47.000 }, 00:15:47.000 "memory_domains": [ 00:15:47.000 { 00:15:47.000 "dma_device_id": "system", 00:15:47.000 "dma_device_type": 1 00:15:47.000 }, 00:15:47.000 { 00:15:47.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.000 "dma_device_type": 2 00:15:47.000 }, 00:15:47.000 { 00:15:47.000 "dma_device_id": "system", 00:15:47.000 "dma_device_type": 1 00:15:47.000 }, 00:15:47.000 { 00:15:47.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.000 "dma_device_type": 2 00:15:47.000 }, 00:15:47.000 { 00:15:47.000 "dma_device_id": "system", 00:15:47.000 "dma_device_type": 1 00:15:47.000 }, 00:15:47.000 { 00:15:47.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.000 "dma_device_type": 2 00:15:47.000 } 00:15:47.000 ], 00:15:47.000 "driver_specific": { 00:15:47.000 "raid": { 00:15:47.000 
"uuid": "0a6efb41-7bc3-416a-ba0b-d41e2f6d9216", 00:15:47.000 "strip_size_kb": 0, 00:15:47.000 "state": "online", 00:15:47.000 "raid_level": "raid1", 00:15:47.000 "superblock": true, 00:15:47.000 "num_base_bdevs": 3, 00:15:47.000 "num_base_bdevs_discovered": 3, 00:15:47.000 "num_base_bdevs_operational": 3, 00:15:47.000 "base_bdevs_list": [ 00:15:47.000 { 00:15:47.000 "name": "NewBaseBdev", 00:15:47.000 "uuid": "2c53d5b2-234e-4555-a4c9-eae11b9011d9", 00:15:47.000 "is_configured": true, 00:15:47.000 "data_offset": 2048, 00:15:47.000 "data_size": 63488 00:15:47.000 }, 00:15:47.000 { 00:15:47.000 "name": "BaseBdev2", 00:15:47.000 "uuid": "16bf5d03-dc2d-4134-bf9b-c1cd170f58d0", 00:15:47.000 "is_configured": true, 00:15:47.000 "data_offset": 2048, 00:15:47.000 "data_size": 63488 00:15:47.000 }, 00:15:47.000 { 00:15:47.000 "name": "BaseBdev3", 00:15:47.000 "uuid": "9942b9fc-1a5c-4dad-a087-853240983666", 00:15:47.000 "is_configured": true, 00:15:47.000 "data_offset": 2048, 00:15:47.000 "data_size": 63488 00:15:47.000 } 00:15:47.000 ] 00:15:47.000 } 00:15:47.000 } 00:15:47.000 }' 00:15:47.000 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:47.259 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:47.259 BaseBdev2 00:15:47.259 BaseBdev3' 00:15:47.259 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.259 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:47.259 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.259 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:47.259 09:15:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.259 09:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.259 09:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.259 [2024-10-15 09:15:31.157021] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:47.259 [2024-10-15 09:15:31.157083] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.259 [2024-10-15 09:15:31.157219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.259 [2024-10-15 09:15:31.157636] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.259 [2024-10-15 09:15:31.157679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68294 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # 
'[' -z 68294 ']' 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 68294 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:47.259 09:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68294 00:15:47.518 killing process with pid 68294 00:15:47.518 09:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:47.518 09:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:47.518 09:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68294' 00:15:47.518 09:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 68294 00:15:47.518 09:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 68294 00:15:47.518 [2024-10-15 09:15:31.196676] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:47.777 [2024-10-15 09:15:31.469057] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:48.713 ************************************ 00:15:48.713 END TEST raid_state_function_test_sb 00:15:48.713 ************************************ 00:15:48.713 09:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:48.713 00:15:48.713 real 0m12.239s 00:15:48.713 user 0m20.177s 00:15:48.713 sys 0m1.693s 00:15:48.713 09:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:48.713 09:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.713 09:15:32 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:15:48.713 09:15:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:48.713 09:15:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:48.713 09:15:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:49.013 ************************************ 00:15:49.013 START TEST raid_superblock_test 00:15:49.013 ************************************ 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68931 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68931 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 68931 ']' 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:49.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:49.013 09:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.013 [2024-10-15 09:15:32.765944] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:15:49.013 [2024-10-15 09:15:32.766157] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68931 ] 00:15:49.272 [2024-10-15 09:15:32.948059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.272 [2024-10-15 09:15:33.110572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.530 [2024-10-15 09:15:33.389944] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.530 [2024-10-15 09:15:33.390044] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:50.098 
09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.098 malloc1 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.098 [2024-10-15 09:15:33.839424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:50.098 [2024-10-15 09:15:33.839526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.098 [2024-10-15 09:15:33.839564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:50.098 [2024-10-15 09:15:33.839581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.098 [2024-10-15 09:15:33.842646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.098 [2024-10-15 09:15:33.842689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:50.098 pt1 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.098 malloc2 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.098 [2024-10-15 09:15:33.901963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:50.098 [2024-10-15 09:15:33.902038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.098 [2024-10-15 09:15:33.902076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:50.098 [2024-10-15 09:15:33.902093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.098 [2024-10-15 09:15:33.905078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.098 [2024-10-15 09:15:33.905135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:50.098 
pt2 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.098 malloc3 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.098 [2024-10-15 09:15:33.974160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:50.098 [2024-10-15 09:15:33.974228] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.098 [2024-10-15 09:15:33.974288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:50.098 [2024-10-15 09:15:33.974305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.098 [2024-10-15 09:15:33.977436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.098 [2024-10-15 09:15:33.977491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:50.098 pt3 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.098 [2024-10-15 09:15:33.982407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:50.098 [2024-10-15 09:15:33.985307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:50.098 [2024-10-15 09:15:33.985427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:50.098 [2024-10-15 09:15:33.985690] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:50.098 [2024-10-15 09:15:33.985714] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:50.098 [2024-10-15 09:15:33.986048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:50.098 
[2024-10-15 09:15:33.986299] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:50.098 [2024-10-15 09:15:33.986317] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:50.098 [2024-10-15 09:15:33.986617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.098 09:15:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:50.098 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.359 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.359 "name": "raid_bdev1", 00:15:50.359 "uuid": "0bfba7fe-9dd6-4ab8-967d-2a1f393e8a1e", 00:15:50.359 "strip_size_kb": 0, 00:15:50.359 "state": "online", 00:15:50.359 "raid_level": "raid1", 00:15:50.359 "superblock": true, 00:15:50.359 "num_base_bdevs": 3, 00:15:50.359 "num_base_bdevs_discovered": 3, 00:15:50.359 "num_base_bdevs_operational": 3, 00:15:50.359 "base_bdevs_list": [ 00:15:50.359 { 00:15:50.359 "name": "pt1", 00:15:50.359 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:50.359 "is_configured": true, 00:15:50.359 "data_offset": 2048, 00:15:50.359 "data_size": 63488 00:15:50.359 }, 00:15:50.359 { 00:15:50.359 "name": "pt2", 00:15:50.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.359 "is_configured": true, 00:15:50.359 "data_offset": 2048, 00:15:50.359 "data_size": 63488 00:15:50.359 }, 00:15:50.359 { 00:15:50.359 "name": "pt3", 00:15:50.359 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:50.359 "is_configured": true, 00:15:50.359 "data_offset": 2048, 00:15:50.359 "data_size": 63488 00:15:50.359 } 00:15:50.359 ] 00:15:50.359 }' 00:15:50.359 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.359 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.618 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:50.618 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:50.618 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:50.618 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:50.618 09:15:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:50.618 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:50.618 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:50.618 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.618 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.618 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:50.618 [2024-10-15 09:15:34.499255] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.618 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:50.895 "name": "raid_bdev1", 00:15:50.895 "aliases": [ 00:15:50.895 "0bfba7fe-9dd6-4ab8-967d-2a1f393e8a1e" 00:15:50.895 ], 00:15:50.895 "product_name": "Raid Volume", 00:15:50.895 "block_size": 512, 00:15:50.895 "num_blocks": 63488, 00:15:50.895 "uuid": "0bfba7fe-9dd6-4ab8-967d-2a1f393e8a1e", 00:15:50.895 "assigned_rate_limits": { 00:15:50.895 "rw_ios_per_sec": 0, 00:15:50.895 "rw_mbytes_per_sec": 0, 00:15:50.895 "r_mbytes_per_sec": 0, 00:15:50.895 "w_mbytes_per_sec": 0 00:15:50.895 }, 00:15:50.895 "claimed": false, 00:15:50.895 "zoned": false, 00:15:50.895 "supported_io_types": { 00:15:50.895 "read": true, 00:15:50.895 "write": true, 00:15:50.895 "unmap": false, 00:15:50.895 "flush": false, 00:15:50.895 "reset": true, 00:15:50.895 "nvme_admin": false, 00:15:50.895 "nvme_io": false, 00:15:50.895 "nvme_io_md": false, 00:15:50.895 "write_zeroes": true, 00:15:50.895 "zcopy": false, 00:15:50.895 "get_zone_info": false, 00:15:50.895 "zone_management": false, 00:15:50.895 "zone_append": false, 00:15:50.895 "compare": false, 00:15:50.895 
"compare_and_write": false, 00:15:50.895 "abort": false, 00:15:50.895 "seek_hole": false, 00:15:50.895 "seek_data": false, 00:15:50.895 "copy": false, 00:15:50.895 "nvme_iov_md": false 00:15:50.895 }, 00:15:50.895 "memory_domains": [ 00:15:50.895 { 00:15:50.895 "dma_device_id": "system", 00:15:50.895 "dma_device_type": 1 00:15:50.895 }, 00:15:50.895 { 00:15:50.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.895 "dma_device_type": 2 00:15:50.895 }, 00:15:50.895 { 00:15:50.895 "dma_device_id": "system", 00:15:50.895 "dma_device_type": 1 00:15:50.895 }, 00:15:50.895 { 00:15:50.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.895 "dma_device_type": 2 00:15:50.895 }, 00:15:50.895 { 00:15:50.895 "dma_device_id": "system", 00:15:50.895 "dma_device_type": 1 00:15:50.895 }, 00:15:50.895 { 00:15:50.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.895 "dma_device_type": 2 00:15:50.895 } 00:15:50.895 ], 00:15:50.895 "driver_specific": { 00:15:50.895 "raid": { 00:15:50.895 "uuid": "0bfba7fe-9dd6-4ab8-967d-2a1f393e8a1e", 00:15:50.895 "strip_size_kb": 0, 00:15:50.895 "state": "online", 00:15:50.895 "raid_level": "raid1", 00:15:50.895 "superblock": true, 00:15:50.895 "num_base_bdevs": 3, 00:15:50.895 "num_base_bdevs_discovered": 3, 00:15:50.895 "num_base_bdevs_operational": 3, 00:15:50.895 "base_bdevs_list": [ 00:15:50.895 { 00:15:50.895 "name": "pt1", 00:15:50.895 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:50.895 "is_configured": true, 00:15:50.895 "data_offset": 2048, 00:15:50.895 "data_size": 63488 00:15:50.895 }, 00:15:50.895 { 00:15:50.895 "name": "pt2", 00:15:50.895 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.895 "is_configured": true, 00:15:50.895 "data_offset": 2048, 00:15:50.895 "data_size": 63488 00:15:50.895 }, 00:15:50.895 { 00:15:50.895 "name": "pt3", 00:15:50.895 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:50.895 "is_configured": true, 00:15:50.895 "data_offset": 2048, 00:15:50.895 "data_size": 63488 00:15:50.895 } 
00:15:50.895 ] 00:15:50.895 } 00:15:50.895 } 00:15:50.895 }' 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:50.895 pt2 00:15:50.895 pt3' 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.895 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:51.155 [2024-10-15 09:15:34.831184] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0bfba7fe-9dd6-4ab8-967d-2a1f393e8a1e 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0bfba7fe-9dd6-4ab8-967d-2a1f393e8a1e ']' 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.155 [2024-10-15 09:15:34.882798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.155 [2024-10-15 09:15:34.882867] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.155 [2024-10-15 09:15:34.882984] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.155 [2024-10-15 09:15:34.883104] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.155 [2024-10-15 09:15:34.883121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:51.155 09:15:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.155 09:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.155 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.155 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:51.155 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:51.155 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:51.155 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:51.155 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:51.155 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:51.155 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:51.155 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:51.155 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:51.155 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.155 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.155 [2024-10-15 09:15:35.054905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:51.155 [2024-10-15 09:15:35.057729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:51.155 [2024-10-15 09:15:35.057811] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:51.155 [2024-10-15 09:15:35.057892] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:51.155 [2024-10-15 09:15:35.057986] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:51.155 [2024-10-15 09:15:35.058029] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:51.155 [2024-10-15 09:15:35.058058] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.155 [2024-10-15 09:15:35.058073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:51.155 request: 00:15:51.155 { 00:15:51.155 "name": "raid_bdev1", 00:15:51.155 "raid_level": "raid1", 00:15:51.155 "base_bdevs": [ 00:15:51.155 "malloc1", 00:15:51.155 "malloc2", 00:15:51.155 "malloc3" 00:15:51.155 ], 00:15:51.155 "superblock": false, 00:15:51.155 "method": "bdev_raid_create", 00:15:51.155 "req_id": 1 00:15:51.155 } 00:15:51.155 Got JSON-RPC error response 00:15:51.155 response: 00:15:51.155 { 00:15:51.155 "code": -17, 00:15:51.155 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:51.155 } 00:15:51.155 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:51.155 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:51.155 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:51.155 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:51.156 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:51.156 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq 
-r '.[]' 00:15:51.156 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.156 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.156 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.156 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.415 [2024-10-15 09:15:35.122998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:51.415 [2024-10-15 09:15:35.123115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.415 [2024-10-15 09:15:35.123220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:51.415 [2024-10-15 09:15:35.123241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.415 [2024-10-15 09:15:35.126535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.415 [2024-10-15 09:15:35.126577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:51.415 [2024-10-15 09:15:35.126703] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:51.415 [2024-10-15 09:15:35.126778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:51.415 pt1 00:15:51.415 
09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.415 "name": "raid_bdev1", 00:15:51.415 "uuid": "0bfba7fe-9dd6-4ab8-967d-2a1f393e8a1e", 00:15:51.415 "strip_size_kb": 0, 00:15:51.415 
"state": "configuring", 00:15:51.415 "raid_level": "raid1", 00:15:51.415 "superblock": true, 00:15:51.415 "num_base_bdevs": 3, 00:15:51.415 "num_base_bdevs_discovered": 1, 00:15:51.415 "num_base_bdevs_operational": 3, 00:15:51.415 "base_bdevs_list": [ 00:15:51.415 { 00:15:51.415 "name": "pt1", 00:15:51.415 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:51.415 "is_configured": true, 00:15:51.415 "data_offset": 2048, 00:15:51.415 "data_size": 63488 00:15:51.415 }, 00:15:51.415 { 00:15:51.415 "name": null, 00:15:51.415 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.415 "is_configured": false, 00:15:51.415 "data_offset": 2048, 00:15:51.415 "data_size": 63488 00:15:51.415 }, 00:15:51.415 { 00:15:51.415 "name": null, 00:15:51.415 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.415 "is_configured": false, 00:15:51.415 "data_offset": 2048, 00:15:51.415 "data_size": 63488 00:15:51.415 } 00:15:51.415 ] 00:15:51.415 }' 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.415 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.985 [2024-10-15 09:15:35.655314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:51.985 [2024-10-15 09:15:35.655412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.985 [2024-10-15 09:15:35.655449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:51.985 
[2024-10-15 09:15:35.655480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.985 [2024-10-15 09:15:35.656187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.985 [2024-10-15 09:15:35.656233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:51.985 [2024-10-15 09:15:35.656363] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:51.985 [2024-10-15 09:15:35.656399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:51.985 pt2 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.985 [2024-10-15 09:15:35.663344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.985 "name": "raid_bdev1", 00:15:51.985 "uuid": "0bfba7fe-9dd6-4ab8-967d-2a1f393e8a1e", 00:15:51.985 "strip_size_kb": 0, 00:15:51.985 "state": "configuring", 00:15:51.985 "raid_level": "raid1", 00:15:51.985 "superblock": true, 00:15:51.985 "num_base_bdevs": 3, 00:15:51.985 "num_base_bdevs_discovered": 1, 00:15:51.985 "num_base_bdevs_operational": 3, 00:15:51.985 "base_bdevs_list": [ 00:15:51.985 { 00:15:51.985 "name": "pt1", 00:15:51.985 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:51.985 "is_configured": true, 00:15:51.985 "data_offset": 2048, 00:15:51.985 "data_size": 63488 00:15:51.985 }, 00:15:51.985 { 00:15:51.985 "name": null, 00:15:51.985 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.985 "is_configured": false, 00:15:51.985 "data_offset": 0, 00:15:51.985 "data_size": 63488 00:15:51.985 }, 00:15:51.985 { 00:15:51.985 "name": null, 00:15:51.985 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.985 "is_configured": false, 00:15:51.985 
"data_offset": 2048, 00:15:51.985 "data_size": 63488 00:15:51.985 } 00:15:51.985 ] 00:15:51.985 }' 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.985 09:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.554 [2024-10-15 09:15:36.195556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:52.554 [2024-10-15 09:15:36.195687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.554 [2024-10-15 09:15:36.195735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:52.554 [2024-10-15 09:15:36.195757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.554 [2024-10-15 09:15:36.196471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.554 [2024-10-15 09:15:36.196514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:52.554 [2024-10-15 09:15:36.196634] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:52.554 [2024-10-15 09:15:36.196695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:52.554 pt2 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.554 09:15:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.554 [2024-10-15 09:15:36.207521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:52.554 [2024-10-15 09:15:36.207578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.554 [2024-10-15 09:15:36.207609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:52.554 [2024-10-15 09:15:36.207631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.554 [2024-10-15 09:15:36.208110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.554 [2024-10-15 09:15:36.208164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:52.554 [2024-10-15 09:15:36.208244] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:52.554 [2024-10-15 09:15:36.208287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:52.554 [2024-10-15 09:15:36.208451] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:52.554 [2024-10-15 09:15:36.208476] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:52.554 [2024-10-15 09:15:36.208797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:52.554 [2024-10-15 09:15:36.209009] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:15:52.554 [2024-10-15 09:15:36.209026] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:52.554 [2024-10-15 09:15:36.209224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.554 pt3 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.554 "name": "raid_bdev1", 00:15:52.554 "uuid": "0bfba7fe-9dd6-4ab8-967d-2a1f393e8a1e", 00:15:52.554 "strip_size_kb": 0, 00:15:52.554 "state": "online", 00:15:52.554 "raid_level": "raid1", 00:15:52.554 "superblock": true, 00:15:52.554 "num_base_bdevs": 3, 00:15:52.554 "num_base_bdevs_discovered": 3, 00:15:52.554 "num_base_bdevs_operational": 3, 00:15:52.554 "base_bdevs_list": [ 00:15:52.554 { 00:15:52.554 "name": "pt1", 00:15:52.554 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:52.554 "is_configured": true, 00:15:52.554 "data_offset": 2048, 00:15:52.554 "data_size": 63488 00:15:52.554 }, 00:15:52.554 { 00:15:52.554 "name": "pt2", 00:15:52.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:52.554 "is_configured": true, 00:15:52.554 "data_offset": 2048, 00:15:52.554 "data_size": 63488 00:15:52.554 }, 00:15:52.554 { 00:15:52.554 "name": "pt3", 00:15:52.554 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:52.554 "is_configured": true, 00:15:52.554 "data_offset": 2048, 00:15:52.554 "data_size": 63488 00:15:52.554 } 00:15:52.554 ] 00:15:52.554 }' 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.554 09:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.813 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:52.813 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:52.813 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:15:52.813 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:52.813 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:52.813 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:52.813 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:52.813 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:52.813 09:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.813 09:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.074 [2024-10-15 09:15:36.744308] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:53.074 09:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.074 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:53.074 "name": "raid_bdev1", 00:15:53.074 "aliases": [ 00:15:53.074 "0bfba7fe-9dd6-4ab8-967d-2a1f393e8a1e" 00:15:53.074 ], 00:15:53.074 "product_name": "Raid Volume", 00:15:53.074 "block_size": 512, 00:15:53.074 "num_blocks": 63488, 00:15:53.074 "uuid": "0bfba7fe-9dd6-4ab8-967d-2a1f393e8a1e", 00:15:53.074 "assigned_rate_limits": { 00:15:53.074 "rw_ios_per_sec": 0, 00:15:53.074 "rw_mbytes_per_sec": 0, 00:15:53.074 "r_mbytes_per_sec": 0, 00:15:53.074 "w_mbytes_per_sec": 0 00:15:53.074 }, 00:15:53.074 "claimed": false, 00:15:53.074 "zoned": false, 00:15:53.074 "supported_io_types": { 00:15:53.074 "read": true, 00:15:53.074 "write": true, 00:15:53.074 "unmap": false, 00:15:53.074 "flush": false, 00:15:53.074 "reset": true, 00:15:53.074 "nvme_admin": false, 00:15:53.074 "nvme_io": false, 00:15:53.074 "nvme_io_md": false, 00:15:53.074 "write_zeroes": true, 00:15:53.074 "zcopy": false, 00:15:53.074 "get_zone_info": false, 
00:15:53.074 "zone_management": false, 00:15:53.074 "zone_append": false, 00:15:53.074 "compare": false, 00:15:53.074 "compare_and_write": false, 00:15:53.074 "abort": false, 00:15:53.074 "seek_hole": false, 00:15:53.074 "seek_data": false, 00:15:53.074 "copy": false, 00:15:53.074 "nvme_iov_md": false 00:15:53.074 }, 00:15:53.074 "memory_domains": [ 00:15:53.074 { 00:15:53.074 "dma_device_id": "system", 00:15:53.074 "dma_device_type": 1 00:15:53.074 }, 00:15:53.074 { 00:15:53.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.074 "dma_device_type": 2 00:15:53.074 }, 00:15:53.074 { 00:15:53.074 "dma_device_id": "system", 00:15:53.074 "dma_device_type": 1 00:15:53.074 }, 00:15:53.074 { 00:15:53.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.074 "dma_device_type": 2 00:15:53.074 }, 00:15:53.074 { 00:15:53.074 "dma_device_id": "system", 00:15:53.074 "dma_device_type": 1 00:15:53.074 }, 00:15:53.074 { 00:15:53.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.074 "dma_device_type": 2 00:15:53.074 } 00:15:53.074 ], 00:15:53.074 "driver_specific": { 00:15:53.074 "raid": { 00:15:53.074 "uuid": "0bfba7fe-9dd6-4ab8-967d-2a1f393e8a1e", 00:15:53.074 "strip_size_kb": 0, 00:15:53.074 "state": "online", 00:15:53.074 "raid_level": "raid1", 00:15:53.074 "superblock": true, 00:15:53.074 "num_base_bdevs": 3, 00:15:53.074 "num_base_bdevs_discovered": 3, 00:15:53.074 "num_base_bdevs_operational": 3, 00:15:53.074 "base_bdevs_list": [ 00:15:53.074 { 00:15:53.074 "name": "pt1", 00:15:53.074 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:53.074 "is_configured": true, 00:15:53.074 "data_offset": 2048, 00:15:53.074 "data_size": 63488 00:15:53.074 }, 00:15:53.074 { 00:15:53.074 "name": "pt2", 00:15:53.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:53.074 "is_configured": true, 00:15:53.074 "data_offset": 2048, 00:15:53.074 "data_size": 63488 00:15:53.074 }, 00:15:53.074 { 00:15:53.074 "name": "pt3", 00:15:53.074 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:15:53.074 "is_configured": true, 00:15:53.074 "data_offset": 2048, 00:15:53.074 "data_size": 63488 00:15:53.074 } 00:15:53.074 ] 00:15:53.074 } 00:15:53.074 } 00:15:53.074 }' 00:15:53.074 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:53.074 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:53.074 pt2 00:15:53.074 pt3' 00:15:53.074 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.074 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:53.074 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.074 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:53.074 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.074 09:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.075 09:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.075 09:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.075 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.075 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.075 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.075 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:53.075 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.075 09:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.075 09:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.075 09:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.075 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.075 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.075 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.075 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:53.338 09:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.338 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.338 09:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.338 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.338 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.338 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.338 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.339 [2024-10-15 09:15:37.060395] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0bfba7fe-9dd6-4ab8-967d-2a1f393e8a1e '!=' 0bfba7fe-9dd6-4ab8-967d-2a1f393e8a1e ']' 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.339 [2024-10-15 09:15:37.103939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.339 09:15:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.339 "name": "raid_bdev1", 00:15:53.339 "uuid": "0bfba7fe-9dd6-4ab8-967d-2a1f393e8a1e", 00:15:53.339 "strip_size_kb": 0, 00:15:53.339 "state": "online", 00:15:53.339 "raid_level": "raid1", 00:15:53.339 "superblock": true, 00:15:53.339 "num_base_bdevs": 3, 00:15:53.339 "num_base_bdevs_discovered": 2, 00:15:53.339 "num_base_bdevs_operational": 2, 00:15:53.339 "base_bdevs_list": [ 00:15:53.339 { 00:15:53.339 "name": null, 00:15:53.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.339 "is_configured": false, 00:15:53.339 "data_offset": 0, 00:15:53.339 "data_size": 63488 00:15:53.339 }, 00:15:53.339 { 00:15:53.339 "name": "pt2", 00:15:53.339 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:53.339 "is_configured": true, 00:15:53.339 "data_offset": 2048, 00:15:53.339 "data_size": 63488 00:15:53.339 }, 00:15:53.339 { 00:15:53.339 "name": "pt3", 00:15:53.339 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:53.339 "is_configured": true, 00:15:53.339 "data_offset": 2048, 00:15:53.339 "data_size": 63488 00:15:53.339 } 
00:15:53.339 ] 00:15:53.339 }' 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.339 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.923 [2024-10-15 09:15:37.628015] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:53.923 [2024-10-15 09:15:37.628090] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.923 [2024-10-15 09:15:37.628240] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.923 [2024-10-15 09:15:37.628342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.923 [2024-10-15 09:15:37.628369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.923 09:15:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.923 [2024-10-15 09:15:37.707955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:53.923 [2024-10-15 09:15:37.708039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.923 [2024-10-15 09:15:37.708070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:53.923 [2024-10-15 09:15:37.708093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.923 [2024-10-15 09:15:37.711435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.923 [2024-10-15 09:15:37.711485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:53.923 [2024-10-15 09:15:37.711598] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:53.923 [2024-10-15 09:15:37.711683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:53.923 pt2 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.923 09:15:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.923 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.923 "name": "raid_bdev1", 00:15:53.923 "uuid": "0bfba7fe-9dd6-4ab8-967d-2a1f393e8a1e", 00:15:53.923 "strip_size_kb": 0, 00:15:53.923 "state": "configuring", 00:15:53.923 "raid_level": "raid1", 00:15:53.923 "superblock": true, 00:15:53.923 "num_base_bdevs": 3, 00:15:53.923 "num_base_bdevs_discovered": 1, 00:15:53.923 "num_base_bdevs_operational": 2, 00:15:53.923 "base_bdevs_list": [ 00:15:53.923 { 00:15:53.923 "name": null, 00:15:53.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.923 "is_configured": false, 00:15:53.923 "data_offset": 2048, 00:15:53.923 "data_size": 63488 00:15:53.923 }, 00:15:53.923 { 00:15:53.923 "name": "pt2", 00:15:53.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:53.923 "is_configured": true, 00:15:53.923 "data_offset": 2048, 00:15:53.923 "data_size": 63488 00:15:53.923 }, 00:15:53.924 { 00:15:53.924 "name": null, 00:15:53.924 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:53.924 "is_configured": false, 00:15:53.924 "data_offset": 2048, 00:15:53.924 "data_size": 63488 00:15:53.924 } 
00:15:53.924 ] 00:15:53.924 }' 00:15:53.924 09:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.924 09:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.515 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.516 [2024-10-15 09:15:38.224226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:54.516 [2024-10-15 09:15:38.224343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.516 [2024-10-15 09:15:38.224395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:54.516 [2024-10-15 09:15:38.224428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.516 [2024-10-15 09:15:38.225349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.516 [2024-10-15 09:15:38.225405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:54.516 [2024-10-15 09:15:38.225568] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:54.516 [2024-10-15 09:15:38.225639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:54.516 [2024-10-15 09:15:38.225862] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:15:54.516 [2024-10-15 09:15:38.225887] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:54.516 [2024-10-15 09:15:38.226319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:54.516 [2024-10-15 09:15:38.226575] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:54.516 [2024-10-15 09:15:38.226598] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:54.516 [2024-10-15 09:15:38.226829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.516 pt3 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.516 "name": "raid_bdev1", 00:15:54.516 "uuid": "0bfba7fe-9dd6-4ab8-967d-2a1f393e8a1e", 00:15:54.516 "strip_size_kb": 0, 00:15:54.516 "state": "online", 00:15:54.516 "raid_level": "raid1", 00:15:54.516 "superblock": true, 00:15:54.516 "num_base_bdevs": 3, 00:15:54.516 "num_base_bdevs_discovered": 2, 00:15:54.516 "num_base_bdevs_operational": 2, 00:15:54.516 "base_bdevs_list": [ 00:15:54.516 { 00:15:54.516 "name": null, 00:15:54.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.516 "is_configured": false, 00:15:54.516 "data_offset": 2048, 00:15:54.516 "data_size": 63488 00:15:54.516 }, 00:15:54.516 { 00:15:54.516 "name": "pt2", 00:15:54.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.516 "is_configured": true, 00:15:54.516 "data_offset": 2048, 00:15:54.516 "data_size": 63488 00:15:54.516 }, 00:15:54.516 { 00:15:54.516 "name": "pt3", 00:15:54.516 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:54.516 "is_configured": true, 00:15:54.516 "data_offset": 2048, 00:15:54.516 "data_size": 63488 00:15:54.516 } 00:15:54.516 ] 00:15:54.516 }' 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.516 09:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.102 [2024-10-15 09:15:38.780357] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:55.102 [2024-10-15 09:15:38.780404] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:55.102 [2024-10-15 09:15:38.780526] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:55.102 [2024-10-15 09:15:38.780622] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:55.102 [2024-10-15 09:15:38.780639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.102 [2024-10-15 09:15:38.856357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:55.102 [2024-10-15 09:15:38.856595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.102 [2024-10-15 09:15:38.856642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:55.102 [2024-10-15 09:15:38.856659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.102 [2024-10-15 09:15:38.859808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.102 [2024-10-15 09:15:38.859869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:55.102 [2024-10-15 09:15:38.859995] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:55.102 [2024-10-15 09:15:38.860057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:55.102 [2024-10-15 09:15:38.860246] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:55.102 [2024-10-15 09:15:38.860281] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:55.102 [2024-10-15 09:15:38.860307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:15:55.102 [2024-10-15 09:15:38.860377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:55.102 pt1 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.102 "name": "raid_bdev1", 00:15:55.102 "uuid": "0bfba7fe-9dd6-4ab8-967d-2a1f393e8a1e", 00:15:55.102 "strip_size_kb": 0, 00:15:55.102 "state": "configuring", 00:15:55.102 "raid_level": "raid1", 00:15:55.102 "superblock": true, 00:15:55.102 "num_base_bdevs": 3, 00:15:55.102 "num_base_bdevs_discovered": 1, 00:15:55.102 "num_base_bdevs_operational": 2, 00:15:55.102 "base_bdevs_list": [ 00:15:55.102 { 00:15:55.102 "name": null, 00:15:55.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.102 "is_configured": false, 00:15:55.102 "data_offset": 2048, 00:15:55.102 "data_size": 63488 00:15:55.102 }, 00:15:55.102 { 00:15:55.102 "name": "pt2", 00:15:55.102 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.102 "is_configured": true, 00:15:55.102 "data_offset": 2048, 00:15:55.102 "data_size": 63488 00:15:55.102 }, 00:15:55.102 { 00:15:55.102 "name": null, 00:15:55.102 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:55.102 "is_configured": false, 00:15:55.102 "data_offset": 2048, 00:15:55.102 "data_size": 63488 00:15:55.102 } 00:15:55.102 ] 00:15:55.102 }' 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.102 09:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.669 [2024-10-15 09:15:39.448736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:55.669 [2024-10-15 09:15:39.448951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.669 [2024-10-15 09:15:39.449031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:55.669 [2024-10-15 09:15:39.449269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.669 [2024-10-15 09:15:39.449911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.669 [2024-10-15 09:15:39.449938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:55.669 [2024-10-15 09:15:39.450055] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:55.669 [2024-10-15 09:15:39.450137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:55.669 [2024-10-15 09:15:39.450309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:55.669 [2024-10-15 09:15:39.450326] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:55.669 [2024-10-15 09:15:39.450690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:55.669 [2024-10-15 09:15:39.450897] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:55.669 [2024-10-15 09:15:39.450918] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:55.669 [2024-10-15 09:15:39.451092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.669 pt3 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.669 "name": "raid_bdev1", 00:15:55.669 "uuid": "0bfba7fe-9dd6-4ab8-967d-2a1f393e8a1e", 00:15:55.669 "strip_size_kb": 0, 00:15:55.669 "state": "online", 00:15:55.669 "raid_level": "raid1", 00:15:55.669 "superblock": true, 00:15:55.669 "num_base_bdevs": 3, 00:15:55.669 "num_base_bdevs_discovered": 2, 00:15:55.669 "num_base_bdevs_operational": 2, 00:15:55.669 "base_bdevs_list": [ 00:15:55.669 { 00:15:55.669 "name": null, 00:15:55.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.669 "is_configured": false, 00:15:55.669 "data_offset": 2048, 00:15:55.669 "data_size": 63488 00:15:55.669 }, 00:15:55.669 { 00:15:55.669 "name": "pt2", 00:15:55.669 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.669 "is_configured": true, 00:15:55.669 "data_offset": 2048, 00:15:55.669 "data_size": 63488 00:15:55.669 }, 00:15:55.669 { 00:15:55.669 "name": "pt3", 00:15:55.669 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:55.669 "is_configured": true, 00:15:55.669 "data_offset": 2048, 00:15:55.669 "data_size": 63488 00:15:55.669 } 00:15:55.669 ] 00:15:55.669 }' 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.669 09:15:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.236 09:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:56.236 09:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:56.236 09:15:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.236 09:15:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.236 09:15:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.236 09:15:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:56.236 09:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.236 09:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:56.236 09:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.236 09:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.236 [2024-10-15 09:15:40.029245] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.236 09:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.236 09:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 0bfba7fe-9dd6-4ab8-967d-2a1f393e8a1e '!=' 0bfba7fe-9dd6-4ab8-967d-2a1f393e8a1e ']' 00:15:56.236 09:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68931 00:15:56.236 09:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 68931 ']' 00:15:56.236 09:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 68931 00:15:56.236 09:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:56.236 09:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:56.236 09:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68931 00:15:56.236 killing process with pid 68931 00:15:56.236 09:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:56.236 09:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:56.236 09:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68931' 00:15:56.236 09:15:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 68931 00:15:56.236 [2024-10-15 09:15:40.101747] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:56.236 09:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 68931 00:15:56.236 [2024-10-15 09:15:40.101883] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.236 [2024-10-15 09:15:40.101991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.236 [2024-10-15 09:15:40.102012] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:56.495 [2024-10-15 09:15:40.394615] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.903 ************************************ 00:15:57.903 END TEST raid_superblock_test 00:15:57.903 ************************************ 00:15:57.903 09:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:57.903 00:15:57.903 real 0m8.916s 00:15:57.903 user 0m14.463s 00:15:57.903 sys 0m1.305s 00:15:57.903 09:15:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:57.903 09:15:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.903 09:15:41 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:15:57.903 09:15:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:57.903 09:15:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:57.903 09:15:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:57.903 ************************************ 00:15:57.903 START TEST raid_read_error_test 00:15:57.903 ************************************ 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:15:57.903 09:15:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:57.903 09:15:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.i9VLTIeUiY 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69391 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69391 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 69391 ']' 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:57.903 09:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.903 [2024-10-15 09:15:41.747818] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:15:57.903 [2024-10-15 09:15:41.748027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69391 ] 00:15:58.161 [2024-10-15 09:15:41.927284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.161 [2024-10-15 09:15:42.085165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.420 [2024-10-15 09:15:42.328261] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.420 [2024-10-15 09:15:42.328374] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.988 BaseBdev1_malloc 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.988 true 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.988 [2024-10-15 09:15:42.822435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:58.988 [2024-10-15 09:15:42.822502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.988 [2024-10-15 09:15:42.822537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:58.988 [2024-10-15 09:15:42.822556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.988 [2024-10-15 09:15:42.825453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.988 [2024-10-15 09:15:42.825499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:58.988 BaseBdev1 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.988 BaseBdev2_malloc 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.988 true 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.988 [2024-10-15 09:15:42.885172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:58.988 [2024-10-15 09:15:42.885265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.988 [2024-10-15 09:15:42.885292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:58.988 [2024-10-15 09:15:42.885311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.988 [2024-10-15 09:15:42.888550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.988 [2024-10-15 09:15:42.888608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:58.988 BaseBdev2 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.988 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.248 BaseBdev3_malloc 00:15:59.248 09:15:42 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.248 true 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.248 [2024-10-15 09:15:42.959967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:59.248 [2024-10-15 09:15:42.960047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.248 [2024-10-15 09:15:42.960074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:59.248 [2024-10-15 09:15:42.960093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.248 [2024-10-15 09:15:42.963279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.248 [2024-10-15 09:15:42.963323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:59.248 BaseBdev3 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.248 [2024-10-15 09:15:42.968231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.248 [2024-10-15 09:15:42.970916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.248 [2024-10-15 09:15:42.971030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:59.248 [2024-10-15 09:15:42.971346] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:59.248 [2024-10-15 09:15:42.971365] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:59.248 [2024-10-15 09:15:42.971725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:59.248 [2024-10-15 09:15:42.971964] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:59.248 [2024-10-15 09:15:42.971985] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:59.248 [2024-10-15 09:15:42.972272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.248 09:15:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.248 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.249 09:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.249 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.249 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.249 09:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.249 09:15:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.249 "name": "raid_bdev1", 00:15:59.249 "uuid": "9f714834-fe98-4107-b3e4-2490ddc90ed4", 00:15:59.249 "strip_size_kb": 0, 00:15:59.249 "state": "online", 00:15:59.249 "raid_level": "raid1", 00:15:59.249 "superblock": true, 00:15:59.249 "num_base_bdevs": 3, 00:15:59.249 "num_base_bdevs_discovered": 3, 00:15:59.249 "num_base_bdevs_operational": 3, 00:15:59.249 "base_bdevs_list": [ 00:15:59.249 { 00:15:59.249 "name": "BaseBdev1", 00:15:59.249 "uuid": "dafd8e59-29bf-5b32-9925-bd530ebcdef9", 00:15:59.249 "is_configured": true, 00:15:59.249 "data_offset": 2048, 00:15:59.249 "data_size": 63488 00:15:59.249 }, 00:15:59.249 { 00:15:59.249 "name": "BaseBdev2", 00:15:59.249 "uuid": "c4eb0d39-1f26-51aa-90fc-d52fbf5f4230", 00:15:59.249 "is_configured": true, 00:15:59.249 "data_offset": 2048, 00:15:59.249 "data_size": 63488 
00:15:59.249 }, 00:15:59.249 { 00:15:59.249 "name": "BaseBdev3", 00:15:59.249 "uuid": "35b9e8f3-f2ff-5f71-ac08-00ee425f9ef5", 00:15:59.249 "is_configured": true, 00:15:59.249 "data_offset": 2048, 00:15:59.249 "data_size": 63488 00:15:59.249 } 00:15:59.249 ] 00:15:59.249 }' 00:15:59.249 09:15:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.249 09:15:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.816 09:15:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:59.816 09:15:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:59.816 [2024-10-15 09:15:43.601924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.752 
09:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.752 09:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.752 "name": "raid_bdev1", 00:16:00.752 "uuid": "9f714834-fe98-4107-b3e4-2490ddc90ed4", 00:16:00.752 "strip_size_kb": 0, 00:16:00.752 "state": "online", 00:16:00.752 "raid_level": "raid1", 00:16:00.752 "superblock": true, 00:16:00.752 "num_base_bdevs": 3, 00:16:00.752 "num_base_bdevs_discovered": 3, 00:16:00.752 "num_base_bdevs_operational": 3, 00:16:00.752 "base_bdevs_list": [ 00:16:00.752 { 00:16:00.752 "name": "BaseBdev1", 00:16:00.752 "uuid": "dafd8e59-29bf-5b32-9925-bd530ebcdef9", 
00:16:00.752 "is_configured": true, 00:16:00.752 "data_offset": 2048, 00:16:00.752 "data_size": 63488 00:16:00.752 }, 00:16:00.752 { 00:16:00.752 "name": "BaseBdev2", 00:16:00.752 "uuid": "c4eb0d39-1f26-51aa-90fc-d52fbf5f4230", 00:16:00.752 "is_configured": true, 00:16:00.752 "data_offset": 2048, 00:16:00.752 "data_size": 63488 00:16:00.752 }, 00:16:00.752 { 00:16:00.752 "name": "BaseBdev3", 00:16:00.753 "uuid": "35b9e8f3-f2ff-5f71-ac08-00ee425f9ef5", 00:16:00.753 "is_configured": true, 00:16:00.753 "data_offset": 2048, 00:16:00.753 "data_size": 63488 00:16:00.753 } 00:16:00.753 ] 00:16:00.753 }' 00:16:00.753 09:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.753 09:15:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.320 09:15:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:01.320 09:15:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.320 09:15:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.320 [2024-10-15 09:15:45.018851] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.320 [2024-10-15 09:15:45.018892] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.320 [2024-10-15 09:15:45.022659] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.320 [2024-10-15 09:15:45.022744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.320 [2024-10-15 09:15:45.022963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.320 [2024-10-15 09:15:45.022983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:01.320 { 00:16:01.320 "results": [ 00:16:01.320 { 00:16:01.320 "job": "raid_bdev1", 
00:16:01.320 "core_mask": "0x1", 00:16:01.320 "workload": "randrw", 00:16:01.320 "percentage": 50, 00:16:01.320 "status": "finished", 00:16:01.320 "queue_depth": 1, 00:16:01.320 "io_size": 131072, 00:16:01.320 "runtime": 1.414459, 00:16:01.320 "iops": 8183.3407684492795, 00:16:01.320 "mibps": 1022.9175960561599, 00:16:01.320 "io_failed": 0, 00:16:01.320 "io_timeout": 0, 00:16:01.320 "avg_latency_us": 117.82564618103278, 00:16:01.320 "min_latency_us": 40.261818181818185, 00:16:01.320 "max_latency_us": 2010.7636363636364 00:16:01.320 } 00:16:01.320 ], 00:16:01.320 "core_count": 1 00:16:01.320 } 00:16:01.320 09:15:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.320 09:15:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69391 00:16:01.320 09:15:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 69391 ']' 00:16:01.320 09:15:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 69391 00:16:01.320 09:15:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:16:01.320 09:15:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:01.320 09:15:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69391 00:16:01.320 09:15:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:01.320 killing process with pid 69391 00:16:01.320 09:15:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:01.320 09:15:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69391' 00:16:01.320 09:15:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 69391 00:16:01.320 [2024-10-15 09:15:45.061998] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:01.320 09:15:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 69391 00:16:01.579 [2024-10-15 09:15:45.287187] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:02.957 09:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.i9VLTIeUiY 00:16:02.957 09:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:02.957 09:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:02.957 09:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:16:02.957 09:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:16:02.957 09:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:02.957 09:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:02.957 09:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:02.957 00:16:02.957 real 0m4.871s 00:16:02.957 user 0m5.944s 00:16:02.957 sys 0m0.678s 00:16:02.957 09:15:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:02.957 ************************************ 00:16:02.957 END TEST raid_read_error_test 00:16:02.957 09:15:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.957 ************************************ 00:16:02.957 09:15:46 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:16:02.957 09:15:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:02.957 09:15:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:02.957 09:15:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:02.957 ************************************ 00:16:02.957 START TEST raid_write_error_test 00:16:02.957 ************************************ 00:16:02.957 09:15:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0bblq3NASQ 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69532 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69532 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 69532 ']' 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:02.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:02.957 09:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.957 [2024-10-15 09:15:46.649090] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:16:02.957 [2024-10-15 09:15:46.649277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69532 ] 00:16:02.957 [2024-10-15 09:15:46.817332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.279 [2024-10-15 09:15:46.968326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.539 [2024-10-15 09:15:47.195621] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.539 [2024-10-15 09:15:47.195716] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.797 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:03.797 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:16:03.797 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:03.797 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:03.797 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.797 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.797 BaseBdev1_malloc 00:16:03.797 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.797 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:16:03.797 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.797 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.797 true 00:16:03.797 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.797 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:03.797 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.797 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.797 [2024-10-15 09:15:47.721394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:03.797 [2024-10-15 09:15:47.721462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.797 [2024-10-15 09:15:47.721492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:03.797 [2024-10-15 09:15:47.721517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.056 [2024-10-15 09:15:47.724521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.056 [2024-10-15 09:15:47.724566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:04.056 BaseBdev1 00:16:04.056 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.056 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:04.056 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:04.056 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.056 09:15:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:04.056 BaseBdev2_malloc 00:16:04.056 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.056 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:04.056 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.056 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.056 true 00:16:04.056 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.056 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:04.056 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.056 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.056 [2024-10-15 09:15:47.783327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:04.056 [2024-10-15 09:15:47.783396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.056 [2024-10-15 09:15:47.783423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:04.056 [2024-10-15 09:15:47.783441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.056 [2024-10-15 09:15:47.786521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.056 [2024-10-15 09:15:47.786577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:04.056 BaseBdev2 00:16:04.056 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:04.057 09:15:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.057 BaseBdev3_malloc 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.057 true 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.057 [2024-10-15 09:15:47.862459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:04.057 [2024-10-15 09:15:47.862552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.057 [2024-10-15 09:15:47.862578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:04.057 [2024-10-15 09:15:47.862596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.057 [2024-10-15 09:15:47.865694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.057 [2024-10-15 09:15:47.865737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:16:04.057 BaseBdev3 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.057 [2024-10-15 09:15:47.870611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.057 [2024-10-15 09:15:47.873358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:04.057 [2024-10-15 09:15:47.873463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:04.057 [2024-10-15 09:15:47.873762] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:04.057 [2024-10-15 09:15:47.873781] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:04.057 [2024-10-15 09:15:47.874107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:04.057 [2024-10-15 09:15:47.874358] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:04.057 [2024-10-15 09:15:47.874378] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:04.057 [2024-10-15 09:15:47.874608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.057 "name": "raid_bdev1", 00:16:04.057 "uuid": "c9d22f10-3427-4ced-b1fa-a632043b0d01", 00:16:04.057 "strip_size_kb": 0, 00:16:04.057 "state": "online", 00:16:04.057 "raid_level": "raid1", 00:16:04.057 "superblock": true, 00:16:04.057 "num_base_bdevs": 3, 00:16:04.057 "num_base_bdevs_discovered": 3, 00:16:04.057 "num_base_bdevs_operational": 3, 00:16:04.057 "base_bdevs_list": [ 00:16:04.057 { 00:16:04.057 "name": "BaseBdev1", 00:16:04.057 
"uuid": "19171f4a-b0c1-5995-9dd8-f87ad2534477", 00:16:04.057 "is_configured": true, 00:16:04.057 "data_offset": 2048, 00:16:04.057 "data_size": 63488 00:16:04.057 }, 00:16:04.057 { 00:16:04.057 "name": "BaseBdev2", 00:16:04.057 "uuid": "b426d4f8-8a21-5924-8f4d-c68ae783eb02", 00:16:04.057 "is_configured": true, 00:16:04.057 "data_offset": 2048, 00:16:04.057 "data_size": 63488 00:16:04.057 }, 00:16:04.057 { 00:16:04.057 "name": "BaseBdev3", 00:16:04.057 "uuid": "ab2a6ac2-76ef-5c49-97bf-469cd5de26c0", 00:16:04.057 "is_configured": true, 00:16:04.057 "data_offset": 2048, 00:16:04.057 "data_size": 63488 00:16:04.057 } 00:16:04.057 ] 00:16:04.057 }' 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.057 09:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.623 09:15:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:04.623 09:15:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:04.623 [2024-10-15 09:15:48.524500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:05.559 09:15:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:05.559 09:15:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.559 09:15:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.559 [2024-10-15 09:15:49.407599] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:05.559 [2024-10-15 09:15:49.407700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:05.559 [2024-10-15 09:15:49.407988] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 
00:16:05.559 09:15:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.559 09:15:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:05.559 09:15:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:05.559 09:15:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:16:05.559 09:15:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:16:05.559 09:15:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:05.559 09:15:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.559 09:15:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.559 09:15:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.559 09:15:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.559 09:15:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:05.559 09:15:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.559 09:15:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.559 09:15:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.559 09:15:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.559 09:15:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.560 09:15:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.560 09:15:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:05.560 09:15:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.560 09:15:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.560 09:15:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.560 "name": "raid_bdev1", 00:16:05.560 "uuid": "c9d22f10-3427-4ced-b1fa-a632043b0d01", 00:16:05.560 "strip_size_kb": 0, 00:16:05.560 "state": "online", 00:16:05.560 "raid_level": "raid1", 00:16:05.560 "superblock": true, 00:16:05.560 "num_base_bdevs": 3, 00:16:05.560 "num_base_bdevs_discovered": 2, 00:16:05.560 "num_base_bdevs_operational": 2, 00:16:05.560 "base_bdevs_list": [ 00:16:05.560 { 00:16:05.560 "name": null, 00:16:05.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.560 "is_configured": false, 00:16:05.560 "data_offset": 0, 00:16:05.560 "data_size": 63488 00:16:05.560 }, 00:16:05.560 { 00:16:05.560 "name": "BaseBdev2", 00:16:05.560 "uuid": "b426d4f8-8a21-5924-8f4d-c68ae783eb02", 00:16:05.560 "is_configured": true, 00:16:05.560 "data_offset": 2048, 00:16:05.560 "data_size": 63488 00:16:05.560 }, 00:16:05.560 { 00:16:05.560 "name": "BaseBdev3", 00:16:05.560 "uuid": "ab2a6ac2-76ef-5c49-97bf-469cd5de26c0", 00:16:05.560 "is_configured": true, 00:16:05.560 "data_offset": 2048, 00:16:05.560 "data_size": 63488 00:16:05.560 } 00:16:05.560 ] 00:16:05.560 }' 00:16:05.560 09:15:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.560 09:15:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.127 09:15:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:06.127 09:15:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.127 09:15:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.127 [2024-10-15 09:15:49.963898] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.127 [2024-10-15 09:15:49.963945] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.127 [2024-10-15 09:15:49.967388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.127 [2024-10-15 09:15:49.967470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.127 [2024-10-15 09:15:49.967614] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.127 [2024-10-15 09:15:49.967639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:06.127 09:15:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.127 { 00:16:06.127 "results": [ 00:16:06.127 { 00:16:06.127 "job": "raid_bdev1", 00:16:06.127 "core_mask": "0x1", 00:16:06.127 "workload": "randrw", 00:16:06.127 "percentage": 50, 00:16:06.127 "status": "finished", 00:16:06.127 "queue_depth": 1, 00:16:06.127 "io_size": 131072, 00:16:06.127 "runtime": 1.436604, 00:16:06.127 "iops": 8774.86071318192, 00:16:06.127 "mibps": 1096.85758914774, 00:16:06.127 "io_failed": 0, 00:16:06.127 "io_timeout": 0, 00:16:06.127 "avg_latency_us": 109.50875109976491, 00:16:06.127 "min_latency_us": 39.79636363636364, 00:16:06.127 "max_latency_us": 2055.447272727273 00:16:06.127 } 00:16:06.127 ], 00:16:06.127 "core_count": 1 00:16:06.127 } 00:16:06.127 09:15:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69532 00:16:06.127 09:15:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 69532 ']' 00:16:06.127 09:15:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 69532 00:16:06.127 09:15:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:16:06.127 09:15:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:06.127 09:15:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69532 00:16:06.127 09:15:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:06.127 09:15:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:06.127 killing process with pid 69532 00:16:06.127 09:15:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69532' 00:16:06.127 09:15:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 69532 00:16:06.127 [2024-10-15 09:15:50.003338] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:06.127 09:15:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 69532 00:16:06.386 [2024-10-15 09:15:50.227912] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:07.818 09:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:07.818 09:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0bblq3NASQ 00:16:07.818 09:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:07.818 09:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:16:07.818 09:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:16:07.818 09:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:07.818 09:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:07.818 09:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:07.819 00:16:07.819 real 0m4.872s 00:16:07.819 user 0m6.014s 00:16:07.819 sys 0m0.629s 00:16:07.819 09:15:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:07.819 09:15:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.819 ************************************ 00:16:07.819 END TEST raid_write_error_test 00:16:07.819 ************************************ 00:16:07.819 09:15:51 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:16:07.819 09:15:51 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:16:07.819 09:15:51 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:16:07.819 09:15:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:07.819 09:15:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:07.819 09:15:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:07.819 ************************************ 00:16:07.819 START TEST raid_state_function_test 00:16:07.819 ************************************ 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:07.819 
09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69681 00:16:07.819 Process raid pid: 69681 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69681' 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69681 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 69681 ']' 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:07.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:07.819 09:15:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.819 [2024-10-15 09:15:51.572969] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:16:07.819 [2024-10-15 09:15:51.573164] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.819 [2024-10-15 09:15:51.740287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.078 [2024-10-15 09:15:51.891777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.337 [2024-10-15 09:15:52.130989] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.337 [2024-10-15 09:15:52.131062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.905 [2024-10-15 09:15:52.638068] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:08.905 [2024-10-15 09:15:52.638146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:08.905 [2024-10-15 09:15:52.638164] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:08.905 [2024-10-15 09:15:52.638181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:08.905 [2024-10-15 09:15:52.638191] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:08.905 [2024-10-15 09:15:52.638206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:08.905 [2024-10-15 09:15:52.638216] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:08.905 [2024-10-15 09:15:52.638230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.905 09:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.905 "name": "Existed_Raid", 00:16:08.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.905 "strip_size_kb": 64, 00:16:08.905 "state": "configuring", 00:16:08.905 "raid_level": "raid0", 00:16:08.905 "superblock": false, 00:16:08.905 "num_base_bdevs": 4, 00:16:08.905 "num_base_bdevs_discovered": 0, 00:16:08.905 "num_base_bdevs_operational": 4, 00:16:08.905 "base_bdevs_list": [ 00:16:08.905 { 00:16:08.905 "name": "BaseBdev1", 00:16:08.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.905 "is_configured": false, 00:16:08.905 "data_offset": 0, 00:16:08.905 "data_size": 0 00:16:08.905 }, 00:16:08.905 { 00:16:08.905 "name": "BaseBdev2", 00:16:08.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.905 "is_configured": false, 00:16:08.905 "data_offset": 0, 00:16:08.905 "data_size": 0 00:16:08.905 }, 00:16:08.905 { 00:16:08.905 "name": "BaseBdev3", 00:16:08.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.905 "is_configured": false, 00:16:08.905 "data_offset": 0, 00:16:08.906 "data_size": 0 00:16:08.906 }, 00:16:08.906 { 00:16:08.906 "name": "BaseBdev4", 00:16:08.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.906 "is_configured": false, 00:16:08.906 "data_offset": 0, 00:16:08.906 "data_size": 0 00:16:08.906 } 00:16:08.906 ] 00:16:08.906 }' 00:16:08.906 09:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.906 09:15:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.474 [2024-10-15 09:15:53.174202] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:09.474 [2024-10-15 09:15:53.174260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.474 [2024-10-15 09:15:53.182202] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:09.474 [2024-10-15 09:15:53.182256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:09.474 [2024-10-15 09:15:53.182271] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:09.474 [2024-10-15 09:15:53.182287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:09.474 [2024-10-15 09:15:53.182297] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:09.474 [2024-10-15 09:15:53.182312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:09.474 [2024-10-15 09:15:53.182322] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:09.474 [2024-10-15 09:15:53.182336] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.474 [2024-10-15 09:15:53.232623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.474 BaseBdev1 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.474 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.474 [ 00:16:09.474 { 00:16:09.474 "name": "BaseBdev1", 00:16:09.474 "aliases": [ 00:16:09.474 "47fde0b1-dbfc-4243-8362-acd94b680932" 00:16:09.474 ], 00:16:09.474 "product_name": "Malloc disk", 00:16:09.474 "block_size": 512, 00:16:09.474 "num_blocks": 65536, 00:16:09.474 "uuid": "47fde0b1-dbfc-4243-8362-acd94b680932", 00:16:09.475 "assigned_rate_limits": { 00:16:09.475 "rw_ios_per_sec": 0, 00:16:09.475 "rw_mbytes_per_sec": 0, 00:16:09.475 "r_mbytes_per_sec": 0, 00:16:09.475 "w_mbytes_per_sec": 0 00:16:09.475 }, 00:16:09.475 "claimed": true, 00:16:09.475 "claim_type": "exclusive_write", 00:16:09.475 "zoned": false, 00:16:09.475 "supported_io_types": { 00:16:09.475 "read": true, 00:16:09.475 "write": true, 00:16:09.475 "unmap": true, 00:16:09.475 "flush": true, 00:16:09.475 "reset": true, 00:16:09.475 "nvme_admin": false, 00:16:09.475 "nvme_io": false, 00:16:09.475 "nvme_io_md": false, 00:16:09.475 "write_zeroes": true, 00:16:09.475 "zcopy": true, 00:16:09.475 "get_zone_info": false, 00:16:09.475 "zone_management": false, 00:16:09.475 "zone_append": false, 00:16:09.475 "compare": false, 00:16:09.475 "compare_and_write": false, 00:16:09.475 "abort": true, 00:16:09.475 "seek_hole": false, 00:16:09.475 "seek_data": false, 00:16:09.475 "copy": true, 00:16:09.475 "nvme_iov_md": false 00:16:09.475 }, 00:16:09.475 "memory_domains": [ 00:16:09.475 { 00:16:09.475 "dma_device_id": "system", 00:16:09.475 "dma_device_type": 1 00:16:09.475 }, 00:16:09.475 { 00:16:09.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.475 "dma_device_type": 2 00:16:09.475 } 00:16:09.475 ], 00:16:09.475 "driver_specific": {} 00:16:09.475 } 00:16:09.475 ] 00:16:09.475 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:09.475 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:09.475 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:09.475 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.475 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.475 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:09.475 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.475 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.475 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.475 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.475 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.475 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.475 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.475 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.475 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.475 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.475 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.475 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.475 "name": "Existed_Raid", 
00:16:09.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.475 "strip_size_kb": 64, 00:16:09.475 "state": "configuring", 00:16:09.475 "raid_level": "raid0", 00:16:09.475 "superblock": false, 00:16:09.475 "num_base_bdevs": 4, 00:16:09.475 "num_base_bdevs_discovered": 1, 00:16:09.475 "num_base_bdevs_operational": 4, 00:16:09.475 "base_bdevs_list": [ 00:16:09.475 { 00:16:09.475 "name": "BaseBdev1", 00:16:09.475 "uuid": "47fde0b1-dbfc-4243-8362-acd94b680932", 00:16:09.475 "is_configured": true, 00:16:09.475 "data_offset": 0, 00:16:09.475 "data_size": 65536 00:16:09.475 }, 00:16:09.475 { 00:16:09.475 "name": "BaseBdev2", 00:16:09.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.475 "is_configured": false, 00:16:09.475 "data_offset": 0, 00:16:09.475 "data_size": 0 00:16:09.475 }, 00:16:09.475 { 00:16:09.475 "name": "BaseBdev3", 00:16:09.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.475 "is_configured": false, 00:16:09.475 "data_offset": 0, 00:16:09.475 "data_size": 0 00:16:09.475 }, 00:16:09.475 { 00:16:09.475 "name": "BaseBdev4", 00:16:09.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.475 "is_configured": false, 00:16:09.475 "data_offset": 0, 00:16:09.475 "data_size": 0 00:16:09.475 } 00:16:09.475 ] 00:16:09.475 }' 00:16:09.475 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.475 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.044 [2024-10-15 09:15:53.792860] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:10.044 [2024-10-15 09:15:53.792939] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.044 [2024-10-15 09:15:53.800901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.044 [2024-10-15 09:15:53.803529] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:10.044 [2024-10-15 09:15:53.803581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:10.044 [2024-10-15 09:15:53.803596] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:10.044 [2024-10-15 09:15:53.803614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:10.044 [2024-10-15 09:15:53.803624] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:10.044 [2024-10-15 09:15:53.803638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.044 "name": "Existed_Raid", 00:16:10.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.044 "strip_size_kb": 64, 00:16:10.044 "state": "configuring", 00:16:10.044 "raid_level": "raid0", 00:16:10.044 "superblock": false, 00:16:10.044 "num_base_bdevs": 4, 00:16:10.044 
"num_base_bdevs_discovered": 1, 00:16:10.044 "num_base_bdevs_operational": 4, 00:16:10.044 "base_bdevs_list": [ 00:16:10.044 { 00:16:10.044 "name": "BaseBdev1", 00:16:10.044 "uuid": "47fde0b1-dbfc-4243-8362-acd94b680932", 00:16:10.044 "is_configured": true, 00:16:10.044 "data_offset": 0, 00:16:10.044 "data_size": 65536 00:16:10.044 }, 00:16:10.044 { 00:16:10.044 "name": "BaseBdev2", 00:16:10.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.044 "is_configured": false, 00:16:10.044 "data_offset": 0, 00:16:10.044 "data_size": 0 00:16:10.044 }, 00:16:10.044 { 00:16:10.044 "name": "BaseBdev3", 00:16:10.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.044 "is_configured": false, 00:16:10.044 "data_offset": 0, 00:16:10.044 "data_size": 0 00:16:10.044 }, 00:16:10.044 { 00:16:10.044 "name": "BaseBdev4", 00:16:10.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.044 "is_configured": false, 00:16:10.044 "data_offset": 0, 00:16:10.044 "data_size": 0 00:16:10.044 } 00:16:10.044 ] 00:16:10.044 }' 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.044 09:15:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.612 [2024-10-15 09:15:54.406727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:10.612 BaseBdev2 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:10.612 09:15:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.612 [ 00:16:10.612 { 00:16:10.612 "name": "BaseBdev2", 00:16:10.612 "aliases": [ 00:16:10.612 "3c4c0929-1e23-4036-82e7-b57e0a430b51" 00:16:10.612 ], 00:16:10.612 "product_name": "Malloc disk", 00:16:10.612 "block_size": 512, 00:16:10.612 "num_blocks": 65536, 00:16:10.612 "uuid": "3c4c0929-1e23-4036-82e7-b57e0a430b51", 00:16:10.612 "assigned_rate_limits": { 00:16:10.612 "rw_ios_per_sec": 0, 00:16:10.612 "rw_mbytes_per_sec": 0, 00:16:10.612 "r_mbytes_per_sec": 0, 00:16:10.612 "w_mbytes_per_sec": 0 00:16:10.612 }, 00:16:10.612 "claimed": true, 00:16:10.612 "claim_type": "exclusive_write", 00:16:10.612 "zoned": false, 00:16:10.612 "supported_io_types": { 
00:16:10.612 "read": true, 00:16:10.612 "write": true, 00:16:10.612 "unmap": true, 00:16:10.612 "flush": true, 00:16:10.612 "reset": true, 00:16:10.612 "nvme_admin": false, 00:16:10.612 "nvme_io": false, 00:16:10.612 "nvme_io_md": false, 00:16:10.612 "write_zeroes": true, 00:16:10.612 "zcopy": true, 00:16:10.612 "get_zone_info": false, 00:16:10.612 "zone_management": false, 00:16:10.612 "zone_append": false, 00:16:10.612 "compare": false, 00:16:10.612 "compare_and_write": false, 00:16:10.612 "abort": true, 00:16:10.612 "seek_hole": false, 00:16:10.612 "seek_data": false, 00:16:10.612 "copy": true, 00:16:10.612 "nvme_iov_md": false 00:16:10.612 }, 00:16:10.612 "memory_domains": [ 00:16:10.612 { 00:16:10.612 "dma_device_id": "system", 00:16:10.612 "dma_device_type": 1 00:16:10.612 }, 00:16:10.612 { 00:16:10.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.612 "dma_device_type": 2 00:16:10.612 } 00:16:10.612 ], 00:16:10.612 "driver_specific": {} 00:16:10.612 } 00:16:10.612 ] 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.612 "name": "Existed_Raid", 00:16:10.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.612 "strip_size_kb": 64, 00:16:10.612 "state": "configuring", 00:16:10.612 "raid_level": "raid0", 00:16:10.612 "superblock": false, 00:16:10.612 "num_base_bdevs": 4, 00:16:10.612 "num_base_bdevs_discovered": 2, 00:16:10.612 "num_base_bdevs_operational": 4, 00:16:10.612 "base_bdevs_list": [ 00:16:10.612 { 00:16:10.612 "name": "BaseBdev1", 00:16:10.612 "uuid": "47fde0b1-dbfc-4243-8362-acd94b680932", 00:16:10.612 "is_configured": true, 00:16:10.612 "data_offset": 0, 00:16:10.612 "data_size": 65536 00:16:10.612 }, 00:16:10.612 { 00:16:10.612 "name": "BaseBdev2", 00:16:10.612 "uuid": "3c4c0929-1e23-4036-82e7-b57e0a430b51", 00:16:10.612 
"is_configured": true, 00:16:10.612 "data_offset": 0, 00:16:10.612 "data_size": 65536 00:16:10.612 }, 00:16:10.612 { 00:16:10.612 "name": "BaseBdev3", 00:16:10.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.612 "is_configured": false, 00:16:10.612 "data_offset": 0, 00:16:10.612 "data_size": 0 00:16:10.612 }, 00:16:10.612 { 00:16:10.612 "name": "BaseBdev4", 00:16:10.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.612 "is_configured": false, 00:16:10.612 "data_offset": 0, 00:16:10.612 "data_size": 0 00:16:10.612 } 00:16:10.612 ] 00:16:10.612 }' 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.612 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.181 09:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:11.181 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.181 09:15:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.181 [2024-10-15 09:15:55.017274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:11.181 BaseBdev3 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.181 [ 00:16:11.181 { 00:16:11.181 "name": "BaseBdev3", 00:16:11.181 "aliases": [ 00:16:11.181 "d315682a-c952-4597-b9f0-e65b3864c89c" 00:16:11.181 ], 00:16:11.181 "product_name": "Malloc disk", 00:16:11.181 "block_size": 512, 00:16:11.181 "num_blocks": 65536, 00:16:11.181 "uuid": "d315682a-c952-4597-b9f0-e65b3864c89c", 00:16:11.181 "assigned_rate_limits": { 00:16:11.181 "rw_ios_per_sec": 0, 00:16:11.181 "rw_mbytes_per_sec": 0, 00:16:11.181 "r_mbytes_per_sec": 0, 00:16:11.181 "w_mbytes_per_sec": 0 00:16:11.181 }, 00:16:11.181 "claimed": true, 00:16:11.181 "claim_type": "exclusive_write", 00:16:11.181 "zoned": false, 00:16:11.181 "supported_io_types": { 00:16:11.181 "read": true, 00:16:11.181 "write": true, 00:16:11.181 "unmap": true, 00:16:11.181 "flush": true, 00:16:11.181 "reset": true, 00:16:11.181 "nvme_admin": false, 00:16:11.181 "nvme_io": false, 00:16:11.181 "nvme_io_md": false, 00:16:11.181 "write_zeroes": true, 00:16:11.181 "zcopy": true, 00:16:11.181 "get_zone_info": false, 00:16:11.181 "zone_management": false, 00:16:11.181 "zone_append": false, 00:16:11.181 "compare": false, 00:16:11.181 "compare_and_write": false, 
00:16:11.181 "abort": true, 00:16:11.181 "seek_hole": false, 00:16:11.181 "seek_data": false, 00:16:11.181 "copy": true, 00:16:11.181 "nvme_iov_md": false 00:16:11.181 }, 00:16:11.181 "memory_domains": [ 00:16:11.181 { 00:16:11.181 "dma_device_id": "system", 00:16:11.181 "dma_device_type": 1 00:16:11.181 }, 00:16:11.181 { 00:16:11.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.181 "dma_device_type": 2 00:16:11.181 } 00:16:11.181 ], 00:16:11.181 "driver_specific": {} 00:16:11.181 } 00:16:11.181 ] 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.181 "name": "Existed_Raid", 00:16:11.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.181 "strip_size_kb": 64, 00:16:11.181 "state": "configuring", 00:16:11.181 "raid_level": "raid0", 00:16:11.181 "superblock": false, 00:16:11.181 "num_base_bdevs": 4, 00:16:11.181 "num_base_bdevs_discovered": 3, 00:16:11.181 "num_base_bdevs_operational": 4, 00:16:11.181 "base_bdevs_list": [ 00:16:11.181 { 00:16:11.181 "name": "BaseBdev1", 00:16:11.181 "uuid": "47fde0b1-dbfc-4243-8362-acd94b680932", 00:16:11.181 "is_configured": true, 00:16:11.181 "data_offset": 0, 00:16:11.181 "data_size": 65536 00:16:11.181 }, 00:16:11.181 { 00:16:11.181 "name": "BaseBdev2", 00:16:11.181 "uuid": "3c4c0929-1e23-4036-82e7-b57e0a430b51", 00:16:11.181 "is_configured": true, 00:16:11.181 "data_offset": 0, 00:16:11.181 "data_size": 65536 00:16:11.181 }, 00:16:11.181 { 00:16:11.181 "name": "BaseBdev3", 00:16:11.181 "uuid": "d315682a-c952-4597-b9f0-e65b3864c89c", 00:16:11.181 "is_configured": true, 00:16:11.181 "data_offset": 0, 00:16:11.181 "data_size": 65536 00:16:11.181 }, 00:16:11.181 { 00:16:11.181 "name": "BaseBdev4", 00:16:11.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.181 "is_configured": false, 
00:16:11.181 "data_offset": 0, 00:16:11.181 "data_size": 0 00:16:11.181 } 00:16:11.181 ] 00:16:11.181 }' 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.181 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.748 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:11.748 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.748 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.748 [2024-10-15 09:15:55.596883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:11.748 [2024-10-15 09:15:55.596951] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:11.748 [2024-10-15 09:15:55.596966] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:11.748 [2024-10-15 09:15:55.597398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:11.748 [2024-10-15 09:15:55.597628] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:11.748 [2024-10-15 09:15:55.597649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:11.748 BaseBdev4 00:16:11.748 [2024-10-15 09:15:55.598005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.748 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.748 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:11.748 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:11.748 09:15:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:11.748 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:11.748 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.749 [ 00:16:11.749 { 00:16:11.749 "name": "BaseBdev4", 00:16:11.749 "aliases": [ 00:16:11.749 "9eea714c-f5d7-4f09-a1ed-6e8aff7f48db" 00:16:11.749 ], 00:16:11.749 "product_name": "Malloc disk", 00:16:11.749 "block_size": 512, 00:16:11.749 "num_blocks": 65536, 00:16:11.749 "uuid": "9eea714c-f5d7-4f09-a1ed-6e8aff7f48db", 00:16:11.749 "assigned_rate_limits": { 00:16:11.749 "rw_ios_per_sec": 0, 00:16:11.749 "rw_mbytes_per_sec": 0, 00:16:11.749 "r_mbytes_per_sec": 0, 00:16:11.749 "w_mbytes_per_sec": 0 00:16:11.749 }, 00:16:11.749 "claimed": true, 00:16:11.749 "claim_type": "exclusive_write", 00:16:11.749 "zoned": false, 00:16:11.749 "supported_io_types": { 00:16:11.749 "read": true, 00:16:11.749 "write": true, 00:16:11.749 "unmap": true, 00:16:11.749 "flush": true, 00:16:11.749 "reset": true, 00:16:11.749 
"nvme_admin": false, 00:16:11.749 "nvme_io": false, 00:16:11.749 "nvme_io_md": false, 00:16:11.749 "write_zeroes": true, 00:16:11.749 "zcopy": true, 00:16:11.749 "get_zone_info": false, 00:16:11.749 "zone_management": false, 00:16:11.749 "zone_append": false, 00:16:11.749 "compare": false, 00:16:11.749 "compare_and_write": false, 00:16:11.749 "abort": true, 00:16:11.749 "seek_hole": false, 00:16:11.749 "seek_data": false, 00:16:11.749 "copy": true, 00:16:11.749 "nvme_iov_md": false 00:16:11.749 }, 00:16:11.749 "memory_domains": [ 00:16:11.749 { 00:16:11.749 "dma_device_id": "system", 00:16:11.749 "dma_device_type": 1 00:16:11.749 }, 00:16:11.749 { 00:16:11.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.749 "dma_device_type": 2 00:16:11.749 } 00:16:11.749 ], 00:16:11.749 "driver_specific": {} 00:16:11.749 } 00:16:11.749 ] 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.749 09:15:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.749 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.007 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.007 "name": "Existed_Raid", 00:16:12.007 "uuid": "e9bf1afd-0b38-4c67-b940-46ce3d05e234", 00:16:12.008 "strip_size_kb": 64, 00:16:12.008 "state": "online", 00:16:12.008 "raid_level": "raid0", 00:16:12.008 "superblock": false, 00:16:12.008 "num_base_bdevs": 4, 00:16:12.008 "num_base_bdevs_discovered": 4, 00:16:12.008 "num_base_bdevs_operational": 4, 00:16:12.008 "base_bdevs_list": [ 00:16:12.008 { 00:16:12.008 "name": "BaseBdev1", 00:16:12.008 "uuid": "47fde0b1-dbfc-4243-8362-acd94b680932", 00:16:12.008 "is_configured": true, 00:16:12.008 "data_offset": 0, 00:16:12.008 "data_size": 65536 00:16:12.008 }, 00:16:12.008 { 00:16:12.008 "name": "BaseBdev2", 00:16:12.008 "uuid": "3c4c0929-1e23-4036-82e7-b57e0a430b51", 00:16:12.008 "is_configured": true, 00:16:12.008 "data_offset": 0, 00:16:12.008 "data_size": 65536 00:16:12.008 }, 00:16:12.008 { 00:16:12.008 "name": "BaseBdev3", 00:16:12.008 "uuid": 
"d315682a-c952-4597-b9f0-e65b3864c89c", 00:16:12.008 "is_configured": true, 00:16:12.008 "data_offset": 0, 00:16:12.008 "data_size": 65536 00:16:12.008 }, 00:16:12.008 { 00:16:12.008 "name": "BaseBdev4", 00:16:12.008 "uuid": "9eea714c-f5d7-4f09-a1ed-6e8aff7f48db", 00:16:12.008 "is_configured": true, 00:16:12.008 "data_offset": 0, 00:16:12.008 "data_size": 65536 00:16:12.008 } 00:16:12.008 ] 00:16:12.008 }' 00:16:12.008 09:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.008 09:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.267 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:12.267 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:12.267 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:12.267 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:12.267 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:12.267 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:12.267 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:12.267 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:12.267 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.267 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.267 [2024-10-15 09:15:56.161584] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:12.267 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.526 09:15:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:12.526 "name": "Existed_Raid", 00:16:12.526 "aliases": [ 00:16:12.526 "e9bf1afd-0b38-4c67-b940-46ce3d05e234" 00:16:12.526 ], 00:16:12.526 "product_name": "Raid Volume", 00:16:12.526 "block_size": 512, 00:16:12.526 "num_blocks": 262144, 00:16:12.526 "uuid": "e9bf1afd-0b38-4c67-b940-46ce3d05e234", 00:16:12.526 "assigned_rate_limits": { 00:16:12.526 "rw_ios_per_sec": 0, 00:16:12.526 "rw_mbytes_per_sec": 0, 00:16:12.526 "r_mbytes_per_sec": 0, 00:16:12.526 "w_mbytes_per_sec": 0 00:16:12.526 }, 00:16:12.526 "claimed": false, 00:16:12.526 "zoned": false, 00:16:12.526 "supported_io_types": { 00:16:12.526 "read": true, 00:16:12.526 "write": true, 00:16:12.526 "unmap": true, 00:16:12.526 "flush": true, 00:16:12.526 "reset": true, 00:16:12.526 "nvme_admin": false, 00:16:12.526 "nvme_io": false, 00:16:12.526 "nvme_io_md": false, 00:16:12.526 "write_zeroes": true, 00:16:12.526 "zcopy": false, 00:16:12.526 "get_zone_info": false, 00:16:12.526 "zone_management": false, 00:16:12.526 "zone_append": false, 00:16:12.526 "compare": false, 00:16:12.526 "compare_and_write": false, 00:16:12.526 "abort": false, 00:16:12.526 "seek_hole": false, 00:16:12.526 "seek_data": false, 00:16:12.526 "copy": false, 00:16:12.526 "nvme_iov_md": false 00:16:12.526 }, 00:16:12.526 "memory_domains": [ 00:16:12.526 { 00:16:12.526 "dma_device_id": "system", 00:16:12.527 "dma_device_type": 1 00:16:12.527 }, 00:16:12.527 { 00:16:12.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.527 "dma_device_type": 2 00:16:12.527 }, 00:16:12.527 { 00:16:12.527 "dma_device_id": "system", 00:16:12.527 "dma_device_type": 1 00:16:12.527 }, 00:16:12.527 { 00:16:12.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.527 "dma_device_type": 2 00:16:12.527 }, 00:16:12.527 { 00:16:12.527 "dma_device_id": "system", 00:16:12.527 "dma_device_type": 1 00:16:12.527 }, 00:16:12.527 { 00:16:12.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:16:12.527 "dma_device_type": 2 00:16:12.527 }, 00:16:12.527 { 00:16:12.527 "dma_device_id": "system", 00:16:12.527 "dma_device_type": 1 00:16:12.527 }, 00:16:12.527 { 00:16:12.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.527 "dma_device_type": 2 00:16:12.527 } 00:16:12.527 ], 00:16:12.527 "driver_specific": { 00:16:12.527 "raid": { 00:16:12.527 "uuid": "e9bf1afd-0b38-4c67-b940-46ce3d05e234", 00:16:12.527 "strip_size_kb": 64, 00:16:12.527 "state": "online", 00:16:12.527 "raid_level": "raid0", 00:16:12.527 "superblock": false, 00:16:12.527 "num_base_bdevs": 4, 00:16:12.527 "num_base_bdevs_discovered": 4, 00:16:12.527 "num_base_bdevs_operational": 4, 00:16:12.527 "base_bdevs_list": [ 00:16:12.527 { 00:16:12.527 "name": "BaseBdev1", 00:16:12.527 "uuid": "47fde0b1-dbfc-4243-8362-acd94b680932", 00:16:12.527 "is_configured": true, 00:16:12.527 "data_offset": 0, 00:16:12.527 "data_size": 65536 00:16:12.527 }, 00:16:12.527 { 00:16:12.527 "name": "BaseBdev2", 00:16:12.527 "uuid": "3c4c0929-1e23-4036-82e7-b57e0a430b51", 00:16:12.527 "is_configured": true, 00:16:12.527 "data_offset": 0, 00:16:12.527 "data_size": 65536 00:16:12.527 }, 00:16:12.527 { 00:16:12.527 "name": "BaseBdev3", 00:16:12.527 "uuid": "d315682a-c952-4597-b9f0-e65b3864c89c", 00:16:12.527 "is_configured": true, 00:16:12.527 "data_offset": 0, 00:16:12.527 "data_size": 65536 00:16:12.527 }, 00:16:12.527 { 00:16:12.527 "name": "BaseBdev4", 00:16:12.527 "uuid": "9eea714c-f5d7-4f09-a1ed-6e8aff7f48db", 00:16:12.527 "is_configured": true, 00:16:12.527 "data_offset": 0, 00:16:12.527 "data_size": 65536 00:16:12.527 } 00:16:12.527 ] 00:16:12.527 } 00:16:12.527 } 00:16:12.527 }' 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:12.527 BaseBdev2 00:16:12.527 BaseBdev3 
00:16:12.527 BaseBdev4' 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.527 09:15:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.527 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.786 09:15:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.786 [2024-10-15 09:15:56.509348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:12.786 [2024-10-15 09:15:56.509509] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:12.786 [2024-10-15 09:15:56.509735] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.786 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.786 "name": "Existed_Raid", 00:16:12.786 "uuid": "e9bf1afd-0b38-4c67-b940-46ce3d05e234", 00:16:12.786 "strip_size_kb": 64, 00:16:12.786 "state": "offline", 00:16:12.786 "raid_level": "raid0", 00:16:12.786 "superblock": false, 00:16:12.786 "num_base_bdevs": 4, 00:16:12.786 "num_base_bdevs_discovered": 3, 00:16:12.786 "num_base_bdevs_operational": 3, 00:16:12.786 "base_bdevs_list": [ 00:16:12.786 { 00:16:12.786 "name": null, 00:16:12.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.786 "is_configured": false, 00:16:12.786 "data_offset": 0, 00:16:12.786 "data_size": 65536 00:16:12.786 }, 00:16:12.786 { 00:16:12.786 "name": "BaseBdev2", 00:16:12.786 "uuid": "3c4c0929-1e23-4036-82e7-b57e0a430b51", 00:16:12.786 "is_configured": 
true, 00:16:12.786 "data_offset": 0, 00:16:12.786 "data_size": 65536 00:16:12.786 }, 00:16:12.786 { 00:16:12.786 "name": "BaseBdev3", 00:16:12.786 "uuid": "d315682a-c952-4597-b9f0-e65b3864c89c", 00:16:12.786 "is_configured": true, 00:16:12.786 "data_offset": 0, 00:16:12.786 "data_size": 65536 00:16:12.786 }, 00:16:12.786 { 00:16:12.786 "name": "BaseBdev4", 00:16:12.786 "uuid": "9eea714c-f5d7-4f09-a1ed-6e8aff7f48db", 00:16:12.786 "is_configured": true, 00:16:12.786 "data_offset": 0, 00:16:12.786 "data_size": 65536 00:16:12.786 } 00:16:12.786 ] 00:16:12.786 }' 00:16:12.787 09:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.787 09:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.355 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:13.355 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:13.355 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:13.355 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.356 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.356 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.356 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.356 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:13.356 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:13.356 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:13.356 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:13.356 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.356 [2024-10-15 09:15:57.197158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.615 [2024-10-15 09:15:57.357907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:13.615 09:15:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.615 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.615 [2024-10-15 09:15:57.501945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:13.615 [2024-10-15 09:15:57.502167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.874 BaseBdev2 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.874 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.875 [ 00:16:13.875 { 00:16:13.875 "name": "BaseBdev2", 00:16:13.875 "aliases": [ 00:16:13.875 "a9698281-683e-4bdc-8184-5064fb9d385f" 00:16:13.875 ], 00:16:13.875 "product_name": "Malloc disk", 00:16:13.875 "block_size": 512, 00:16:13.875 "num_blocks": 65536, 00:16:13.875 "uuid": "a9698281-683e-4bdc-8184-5064fb9d385f", 00:16:13.875 "assigned_rate_limits": { 00:16:13.875 "rw_ios_per_sec": 0, 00:16:13.875 "rw_mbytes_per_sec": 0, 00:16:13.875 "r_mbytes_per_sec": 0, 00:16:13.875 "w_mbytes_per_sec": 0 00:16:13.875 }, 00:16:13.875 "claimed": false, 00:16:13.875 "zoned": false, 00:16:13.875 "supported_io_types": { 00:16:13.875 "read": true, 00:16:13.875 "write": true, 00:16:13.875 "unmap": true, 00:16:13.875 "flush": true, 00:16:13.875 "reset": true, 00:16:13.875 "nvme_admin": false, 00:16:13.875 "nvme_io": false, 00:16:13.875 "nvme_io_md": false, 00:16:13.875 "write_zeroes": true, 00:16:13.875 "zcopy": true, 00:16:13.875 "get_zone_info": false, 00:16:13.875 "zone_management": false, 00:16:13.875 "zone_append": false, 00:16:13.875 "compare": false, 00:16:13.875 "compare_and_write": false, 00:16:13.875 "abort": true, 00:16:13.875 "seek_hole": false, 00:16:13.875 
"seek_data": false, 00:16:13.875 "copy": true, 00:16:13.875 "nvme_iov_md": false 00:16:13.875 }, 00:16:13.875 "memory_domains": [ 00:16:13.875 { 00:16:13.875 "dma_device_id": "system", 00:16:13.875 "dma_device_type": 1 00:16:13.875 }, 00:16:13.875 { 00:16:13.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.875 "dma_device_type": 2 00:16:13.875 } 00:16:13.875 ], 00:16:13.875 "driver_specific": {} 00:16:13.875 } 00:16:13.875 ] 00:16:13.875 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.875 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:13.875 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:13.875 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:13.875 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:13.875 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.875 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.875 BaseBdev3 00:16:13.875 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.875 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:13.875 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:13.875 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:13.875 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:13.875 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:13.875 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:16:13.875 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:13.875 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.875 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.875 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.875 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:13.875 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.875 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.134 [ 00:16:14.134 { 00:16:14.134 "name": "BaseBdev3", 00:16:14.134 "aliases": [ 00:16:14.134 "1602f82e-9609-4482-bc87-17ed1a50d44e" 00:16:14.134 ], 00:16:14.134 "product_name": "Malloc disk", 00:16:14.134 "block_size": 512, 00:16:14.134 "num_blocks": 65536, 00:16:14.134 "uuid": "1602f82e-9609-4482-bc87-17ed1a50d44e", 00:16:14.134 "assigned_rate_limits": { 00:16:14.134 "rw_ios_per_sec": 0, 00:16:14.134 "rw_mbytes_per_sec": 0, 00:16:14.134 "r_mbytes_per_sec": 0, 00:16:14.134 "w_mbytes_per_sec": 0 00:16:14.134 }, 00:16:14.134 "claimed": false, 00:16:14.134 "zoned": false, 00:16:14.134 "supported_io_types": { 00:16:14.134 "read": true, 00:16:14.134 "write": true, 00:16:14.134 "unmap": true, 00:16:14.134 "flush": true, 00:16:14.134 "reset": true, 00:16:14.134 "nvme_admin": false, 00:16:14.134 "nvme_io": false, 00:16:14.134 "nvme_io_md": false, 00:16:14.134 "write_zeroes": true, 00:16:14.134 "zcopy": true, 00:16:14.134 "get_zone_info": false, 00:16:14.134 "zone_management": false, 00:16:14.134 "zone_append": false, 00:16:14.135 "compare": false, 00:16:14.135 "compare_and_write": false, 00:16:14.135 "abort": true, 00:16:14.135 "seek_hole": false, 00:16:14.135 "seek_data": false, 
00:16:14.135 "copy": true, 00:16:14.135 "nvme_iov_md": false 00:16:14.135 }, 00:16:14.135 "memory_domains": [ 00:16:14.135 { 00:16:14.135 "dma_device_id": "system", 00:16:14.135 "dma_device_type": 1 00:16:14.135 }, 00:16:14.135 { 00:16:14.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.135 "dma_device_type": 2 00:16:14.135 } 00:16:14.135 ], 00:16:14.135 "driver_specific": {} 00:16:14.135 } 00:16:14.135 ] 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.135 BaseBdev4 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:14.135 
09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.135 [ 00:16:14.135 { 00:16:14.135 "name": "BaseBdev4", 00:16:14.135 "aliases": [ 00:16:14.135 "81bc174d-787a-407e-a84e-c7631168dc01" 00:16:14.135 ], 00:16:14.135 "product_name": "Malloc disk", 00:16:14.135 "block_size": 512, 00:16:14.135 "num_blocks": 65536, 00:16:14.135 "uuid": "81bc174d-787a-407e-a84e-c7631168dc01", 00:16:14.135 "assigned_rate_limits": { 00:16:14.135 "rw_ios_per_sec": 0, 00:16:14.135 "rw_mbytes_per_sec": 0, 00:16:14.135 "r_mbytes_per_sec": 0, 00:16:14.135 "w_mbytes_per_sec": 0 00:16:14.135 }, 00:16:14.135 "claimed": false, 00:16:14.135 "zoned": false, 00:16:14.135 "supported_io_types": { 00:16:14.135 "read": true, 00:16:14.135 "write": true, 00:16:14.135 "unmap": true, 00:16:14.135 "flush": true, 00:16:14.135 "reset": true, 00:16:14.135 "nvme_admin": false, 00:16:14.135 "nvme_io": false, 00:16:14.135 "nvme_io_md": false, 00:16:14.135 "write_zeroes": true, 00:16:14.135 "zcopy": true, 00:16:14.135 "get_zone_info": false, 00:16:14.135 "zone_management": false, 00:16:14.135 "zone_append": false, 00:16:14.135 "compare": false, 00:16:14.135 "compare_and_write": false, 00:16:14.135 "abort": true, 00:16:14.135 "seek_hole": false, 00:16:14.135 "seek_data": false, 00:16:14.135 
"copy": true, 00:16:14.135 "nvme_iov_md": false 00:16:14.135 }, 00:16:14.135 "memory_domains": [ 00:16:14.135 { 00:16:14.135 "dma_device_id": "system", 00:16:14.135 "dma_device_type": 1 00:16:14.135 }, 00:16:14.135 { 00:16:14.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.135 "dma_device_type": 2 00:16:14.135 } 00:16:14.135 ], 00:16:14.135 "driver_specific": {} 00:16:14.135 } 00:16:14.135 ] 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.135 [2024-10-15 09:15:57.894429] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:14.135 [2024-10-15 09:15:57.894660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:14.135 [2024-10-15 09:15:57.894795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:14.135 [2024-10-15 09:15:57.897647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:14.135 [2024-10-15 09:15:57.897732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.135 09:15:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.135 "name": "Existed_Raid", 00:16:14.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.135 "strip_size_kb": 64, 00:16:14.135 "state": "configuring", 00:16:14.135 
"raid_level": "raid0", 00:16:14.135 "superblock": false, 00:16:14.135 "num_base_bdevs": 4, 00:16:14.135 "num_base_bdevs_discovered": 3, 00:16:14.135 "num_base_bdevs_operational": 4, 00:16:14.135 "base_bdevs_list": [ 00:16:14.135 { 00:16:14.135 "name": "BaseBdev1", 00:16:14.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.135 "is_configured": false, 00:16:14.135 "data_offset": 0, 00:16:14.135 "data_size": 0 00:16:14.135 }, 00:16:14.135 { 00:16:14.135 "name": "BaseBdev2", 00:16:14.135 "uuid": "a9698281-683e-4bdc-8184-5064fb9d385f", 00:16:14.135 "is_configured": true, 00:16:14.135 "data_offset": 0, 00:16:14.135 "data_size": 65536 00:16:14.135 }, 00:16:14.135 { 00:16:14.135 "name": "BaseBdev3", 00:16:14.135 "uuid": "1602f82e-9609-4482-bc87-17ed1a50d44e", 00:16:14.135 "is_configured": true, 00:16:14.135 "data_offset": 0, 00:16:14.135 "data_size": 65536 00:16:14.135 }, 00:16:14.135 { 00:16:14.135 "name": "BaseBdev4", 00:16:14.135 "uuid": "81bc174d-787a-407e-a84e-c7631168dc01", 00:16:14.135 "is_configured": true, 00:16:14.135 "data_offset": 0, 00:16:14.135 "data_size": 65536 00:16:14.135 } 00:16:14.135 ] 00:16:14.135 }' 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.135 09:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.750 [2024-10-15 09:15:58.426663] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.750 "name": "Existed_Raid", 00:16:14.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.750 "strip_size_kb": 64, 00:16:14.750 "state": "configuring", 00:16:14.750 "raid_level": "raid0", 00:16:14.750 "superblock": false, 00:16:14.750 
"num_base_bdevs": 4, 00:16:14.750 "num_base_bdevs_discovered": 2, 00:16:14.750 "num_base_bdevs_operational": 4, 00:16:14.750 "base_bdevs_list": [ 00:16:14.750 { 00:16:14.750 "name": "BaseBdev1", 00:16:14.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.750 "is_configured": false, 00:16:14.750 "data_offset": 0, 00:16:14.750 "data_size": 0 00:16:14.750 }, 00:16:14.750 { 00:16:14.750 "name": null, 00:16:14.750 "uuid": "a9698281-683e-4bdc-8184-5064fb9d385f", 00:16:14.750 "is_configured": false, 00:16:14.750 "data_offset": 0, 00:16:14.750 "data_size": 65536 00:16:14.750 }, 00:16:14.750 { 00:16:14.750 "name": "BaseBdev3", 00:16:14.750 "uuid": "1602f82e-9609-4482-bc87-17ed1a50d44e", 00:16:14.750 "is_configured": true, 00:16:14.750 "data_offset": 0, 00:16:14.750 "data_size": 65536 00:16:14.750 }, 00:16:14.750 { 00:16:14.750 "name": "BaseBdev4", 00:16:14.750 "uuid": "81bc174d-787a-407e-a84e-c7631168dc01", 00:16:14.750 "is_configured": true, 00:16:14.750 "data_offset": 0, 00:16:14.750 "data_size": 65536 00:16:14.750 } 00:16:14.750 ] 00:16:14.750 }' 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.750 09:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.317 09:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:15.317 09:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.317 09:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.317 09:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.317 09:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.317 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:15.317 09:15:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:15.317 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.317 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.317 [2024-10-15 09:15:59.051137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:15.317 BaseBdev1 00:16:15.317 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.317 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:15.317 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:15.317 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:15.317 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:15.317 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:15.317 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:15.317 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:15.317 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.317 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.317 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.317 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:15.317 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.318 09:15:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:15.318 [ 00:16:15.318 { 00:16:15.318 "name": "BaseBdev1", 00:16:15.318 "aliases": [ 00:16:15.318 "cd66ded0-847c-4f60-ba3c-35eabd69b63b" 00:16:15.318 ], 00:16:15.318 "product_name": "Malloc disk", 00:16:15.318 "block_size": 512, 00:16:15.318 "num_blocks": 65536, 00:16:15.318 "uuid": "cd66ded0-847c-4f60-ba3c-35eabd69b63b", 00:16:15.318 "assigned_rate_limits": { 00:16:15.318 "rw_ios_per_sec": 0, 00:16:15.318 "rw_mbytes_per_sec": 0, 00:16:15.318 "r_mbytes_per_sec": 0, 00:16:15.318 "w_mbytes_per_sec": 0 00:16:15.318 }, 00:16:15.318 "claimed": true, 00:16:15.318 "claim_type": "exclusive_write", 00:16:15.318 "zoned": false, 00:16:15.318 "supported_io_types": { 00:16:15.318 "read": true, 00:16:15.318 "write": true, 00:16:15.318 "unmap": true, 00:16:15.318 "flush": true, 00:16:15.318 "reset": true, 00:16:15.318 "nvme_admin": false, 00:16:15.318 "nvme_io": false, 00:16:15.318 "nvme_io_md": false, 00:16:15.318 "write_zeroes": true, 00:16:15.318 "zcopy": true, 00:16:15.318 "get_zone_info": false, 00:16:15.318 "zone_management": false, 00:16:15.318 "zone_append": false, 00:16:15.318 "compare": false, 00:16:15.318 "compare_and_write": false, 00:16:15.318 "abort": true, 00:16:15.318 "seek_hole": false, 00:16:15.318 "seek_data": false, 00:16:15.318 "copy": true, 00:16:15.318 "nvme_iov_md": false 00:16:15.318 }, 00:16:15.318 "memory_domains": [ 00:16:15.318 { 00:16:15.318 "dma_device_id": "system", 00:16:15.318 "dma_device_type": 1 00:16:15.318 }, 00:16:15.318 { 00:16:15.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.318 "dma_device_type": 2 00:16:15.318 } 00:16:15.318 ], 00:16:15.318 "driver_specific": {} 00:16:15.318 } 00:16:15.318 ] 00:16:15.318 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.318 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:15.318 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:15.318 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.318 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.318 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:15.318 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.318 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.318 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.318 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.318 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.318 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.318 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.318 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.318 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.318 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.318 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.318 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.318 "name": "Existed_Raid", 00:16:15.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.318 "strip_size_kb": 64, 00:16:15.318 "state": "configuring", 00:16:15.318 "raid_level": "raid0", 00:16:15.318 "superblock": false, 
00:16:15.318 "num_base_bdevs": 4, 00:16:15.318 "num_base_bdevs_discovered": 3, 00:16:15.318 "num_base_bdevs_operational": 4, 00:16:15.318 "base_bdevs_list": [ 00:16:15.318 { 00:16:15.318 "name": "BaseBdev1", 00:16:15.318 "uuid": "cd66ded0-847c-4f60-ba3c-35eabd69b63b", 00:16:15.318 "is_configured": true, 00:16:15.318 "data_offset": 0, 00:16:15.318 "data_size": 65536 00:16:15.318 }, 00:16:15.318 { 00:16:15.318 "name": null, 00:16:15.318 "uuid": "a9698281-683e-4bdc-8184-5064fb9d385f", 00:16:15.318 "is_configured": false, 00:16:15.318 "data_offset": 0, 00:16:15.318 "data_size": 65536 00:16:15.318 }, 00:16:15.318 { 00:16:15.318 "name": "BaseBdev3", 00:16:15.318 "uuid": "1602f82e-9609-4482-bc87-17ed1a50d44e", 00:16:15.318 "is_configured": true, 00:16:15.318 "data_offset": 0, 00:16:15.318 "data_size": 65536 00:16:15.318 }, 00:16:15.318 { 00:16:15.318 "name": "BaseBdev4", 00:16:15.318 "uuid": "81bc174d-787a-407e-a84e-c7631168dc01", 00:16:15.318 "is_configured": true, 00:16:15.318 "data_offset": 0, 00:16:15.318 "data_size": 65536 00:16:15.318 } 00:16:15.318 ] 00:16:15.318 }' 00:16:15.318 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.318 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.885 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.885 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.885 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.885 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:15.885 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.885 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:15.885 09:15:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:15.885 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.885 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.885 [2024-10-15 09:15:59.651552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:15.885 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.886 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:15.886 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.886 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.886 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:15.886 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.886 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.886 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.886 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.886 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.886 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.886 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.886 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.886 09:15:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.886 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.886 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.886 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.886 "name": "Existed_Raid", 00:16:15.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.886 "strip_size_kb": 64, 00:16:15.886 "state": "configuring", 00:16:15.886 "raid_level": "raid0", 00:16:15.886 "superblock": false, 00:16:15.886 "num_base_bdevs": 4, 00:16:15.886 "num_base_bdevs_discovered": 2, 00:16:15.886 "num_base_bdevs_operational": 4, 00:16:15.886 "base_bdevs_list": [ 00:16:15.886 { 00:16:15.886 "name": "BaseBdev1", 00:16:15.886 "uuid": "cd66ded0-847c-4f60-ba3c-35eabd69b63b", 00:16:15.886 "is_configured": true, 00:16:15.886 "data_offset": 0, 00:16:15.886 "data_size": 65536 00:16:15.886 }, 00:16:15.886 { 00:16:15.886 "name": null, 00:16:15.886 "uuid": "a9698281-683e-4bdc-8184-5064fb9d385f", 00:16:15.886 "is_configured": false, 00:16:15.886 "data_offset": 0, 00:16:15.886 "data_size": 65536 00:16:15.886 }, 00:16:15.886 { 00:16:15.886 "name": null, 00:16:15.886 "uuid": "1602f82e-9609-4482-bc87-17ed1a50d44e", 00:16:15.886 "is_configured": false, 00:16:15.886 "data_offset": 0, 00:16:15.886 "data_size": 65536 00:16:15.886 }, 00:16:15.886 { 00:16:15.886 "name": "BaseBdev4", 00:16:15.886 "uuid": "81bc174d-787a-407e-a84e-c7631168dc01", 00:16:15.886 "is_configured": true, 00:16:15.886 "data_offset": 0, 00:16:15.886 "data_size": 65536 00:16:15.886 } 00:16:15.886 ] 00:16:15.886 }' 00:16:15.886 09:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.886 09:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.451 [2024-10-15 09:16:00.227646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.451 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.451 "name": "Existed_Raid", 00:16:16.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.451 "strip_size_kb": 64, 00:16:16.451 "state": "configuring", 00:16:16.451 "raid_level": "raid0", 00:16:16.451 "superblock": false, 00:16:16.451 "num_base_bdevs": 4, 00:16:16.451 "num_base_bdevs_discovered": 3, 00:16:16.451 "num_base_bdevs_operational": 4, 00:16:16.451 "base_bdevs_list": [ 00:16:16.451 { 00:16:16.451 "name": "BaseBdev1", 00:16:16.451 "uuid": "cd66ded0-847c-4f60-ba3c-35eabd69b63b", 00:16:16.451 "is_configured": true, 00:16:16.451 "data_offset": 0, 00:16:16.451 "data_size": 65536 00:16:16.451 }, 00:16:16.451 { 00:16:16.451 "name": null, 00:16:16.451 "uuid": "a9698281-683e-4bdc-8184-5064fb9d385f", 00:16:16.451 "is_configured": false, 00:16:16.451 "data_offset": 0, 00:16:16.451 "data_size": 65536 00:16:16.451 }, 00:16:16.451 { 00:16:16.451 "name": "BaseBdev3", 00:16:16.452 "uuid": "1602f82e-9609-4482-bc87-17ed1a50d44e", 00:16:16.452 "is_configured": 
true, 00:16:16.452 "data_offset": 0, 00:16:16.452 "data_size": 65536 00:16:16.452 }, 00:16:16.452 { 00:16:16.452 "name": "BaseBdev4", 00:16:16.452 "uuid": "81bc174d-787a-407e-a84e-c7631168dc01", 00:16:16.452 "is_configured": true, 00:16:16.452 "data_offset": 0, 00:16:16.452 "data_size": 65536 00:16:16.452 } 00:16:16.452 ] 00:16:16.452 }' 00:16:16.452 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.452 09:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.017 [2024-10-15 09:16:00.795921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.017 09:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.275 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.275 "name": "Existed_Raid", 00:16:17.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.275 "strip_size_kb": 64, 00:16:17.275 "state": "configuring", 00:16:17.275 "raid_level": "raid0", 00:16:17.275 "superblock": false, 00:16:17.275 "num_base_bdevs": 4, 00:16:17.275 "num_base_bdevs_discovered": 2, 00:16:17.275 "num_base_bdevs_operational": 4, 00:16:17.275 
"base_bdevs_list": [ 00:16:17.275 { 00:16:17.275 "name": null, 00:16:17.275 "uuid": "cd66ded0-847c-4f60-ba3c-35eabd69b63b", 00:16:17.275 "is_configured": false, 00:16:17.275 "data_offset": 0, 00:16:17.275 "data_size": 65536 00:16:17.275 }, 00:16:17.275 { 00:16:17.275 "name": null, 00:16:17.275 "uuid": "a9698281-683e-4bdc-8184-5064fb9d385f", 00:16:17.275 "is_configured": false, 00:16:17.275 "data_offset": 0, 00:16:17.275 "data_size": 65536 00:16:17.275 }, 00:16:17.275 { 00:16:17.275 "name": "BaseBdev3", 00:16:17.275 "uuid": "1602f82e-9609-4482-bc87-17ed1a50d44e", 00:16:17.275 "is_configured": true, 00:16:17.275 "data_offset": 0, 00:16:17.275 "data_size": 65536 00:16:17.275 }, 00:16:17.275 { 00:16:17.275 "name": "BaseBdev4", 00:16:17.275 "uuid": "81bc174d-787a-407e-a84e-c7631168dc01", 00:16:17.275 "is_configured": true, 00:16:17.275 "data_offset": 0, 00:16:17.275 "data_size": 65536 00:16:17.275 } 00:16:17.275 ] 00:16:17.275 }' 00:16:17.275 09:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.275 09:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.533 09:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.533 09:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.533 09:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.533 09:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:17.533 09:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.792 09:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:17.792 09:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:17.792 09:16:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.792 09:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.792 [2024-10-15 09:16:01.491932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:17.792 09:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.792 09:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:17.792 09:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.793 09:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.793 09:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:17.793 09:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.793 09:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.793 09:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.793 09:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.793 09:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.793 09:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.793 09:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.793 09:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.793 09:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.793 09:16:01 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:17.793 09:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.793 09:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.793 "name": "Existed_Raid", 00:16:17.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.793 "strip_size_kb": 64, 00:16:17.793 "state": "configuring", 00:16:17.793 "raid_level": "raid0", 00:16:17.793 "superblock": false, 00:16:17.793 "num_base_bdevs": 4, 00:16:17.793 "num_base_bdevs_discovered": 3, 00:16:17.793 "num_base_bdevs_operational": 4, 00:16:17.793 "base_bdevs_list": [ 00:16:17.793 { 00:16:17.793 "name": null, 00:16:17.793 "uuid": "cd66ded0-847c-4f60-ba3c-35eabd69b63b", 00:16:17.793 "is_configured": false, 00:16:17.793 "data_offset": 0, 00:16:17.793 "data_size": 65536 00:16:17.793 }, 00:16:17.793 { 00:16:17.793 "name": "BaseBdev2", 00:16:17.793 "uuid": "a9698281-683e-4bdc-8184-5064fb9d385f", 00:16:17.793 "is_configured": true, 00:16:17.793 "data_offset": 0, 00:16:17.793 "data_size": 65536 00:16:17.793 }, 00:16:17.793 { 00:16:17.793 "name": "BaseBdev3", 00:16:17.793 "uuid": "1602f82e-9609-4482-bc87-17ed1a50d44e", 00:16:17.793 "is_configured": true, 00:16:17.793 "data_offset": 0, 00:16:17.793 "data_size": 65536 00:16:17.793 }, 00:16:17.793 { 00:16:17.793 "name": "BaseBdev4", 00:16:17.793 "uuid": "81bc174d-787a-407e-a84e-c7631168dc01", 00:16:17.793 "is_configured": true, 00:16:17.793 "data_offset": 0, 00:16:17.793 "data_size": 65536 00:16:17.793 } 00:16:17.793 ] 00:16:17.793 }' 00:16:17.793 09:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.793 09:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cd66ded0-847c-4f60-ba3c-35eabd69b63b 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.360 [2024-10-15 09:16:02.204803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:18.360 [2024-10-15 09:16:02.205178] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:18.360 [2024-10-15 09:16:02.205201] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:18.360 [2024-10-15 09:16:02.205581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:18.360 [2024-10-15 09:16:02.205795] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:18.360 [2024-10-15 09:16:02.205817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:18.360 [2024-10-15 09:16:02.206225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.360 NewBaseBdev 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.360 [ 00:16:18.360 { 
00:16:18.360 "name": "NewBaseBdev", 00:16:18.360 "aliases": [ 00:16:18.360 "cd66ded0-847c-4f60-ba3c-35eabd69b63b" 00:16:18.360 ], 00:16:18.360 "product_name": "Malloc disk", 00:16:18.360 "block_size": 512, 00:16:18.360 "num_blocks": 65536, 00:16:18.360 "uuid": "cd66ded0-847c-4f60-ba3c-35eabd69b63b", 00:16:18.360 "assigned_rate_limits": { 00:16:18.360 "rw_ios_per_sec": 0, 00:16:18.360 "rw_mbytes_per_sec": 0, 00:16:18.360 "r_mbytes_per_sec": 0, 00:16:18.360 "w_mbytes_per_sec": 0 00:16:18.360 }, 00:16:18.360 "claimed": true, 00:16:18.360 "claim_type": "exclusive_write", 00:16:18.360 "zoned": false, 00:16:18.360 "supported_io_types": { 00:16:18.360 "read": true, 00:16:18.360 "write": true, 00:16:18.360 "unmap": true, 00:16:18.360 "flush": true, 00:16:18.360 "reset": true, 00:16:18.360 "nvme_admin": false, 00:16:18.360 "nvme_io": false, 00:16:18.360 "nvme_io_md": false, 00:16:18.360 "write_zeroes": true, 00:16:18.360 "zcopy": true, 00:16:18.360 "get_zone_info": false, 00:16:18.360 "zone_management": false, 00:16:18.360 "zone_append": false, 00:16:18.360 "compare": false, 00:16:18.360 "compare_and_write": false, 00:16:18.360 "abort": true, 00:16:18.360 "seek_hole": false, 00:16:18.360 "seek_data": false, 00:16:18.360 "copy": true, 00:16:18.360 "nvme_iov_md": false 00:16:18.360 }, 00:16:18.360 "memory_domains": [ 00:16:18.360 { 00:16:18.360 "dma_device_id": "system", 00:16:18.360 "dma_device_type": 1 00:16:18.360 }, 00:16:18.360 { 00:16:18.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.360 "dma_device_type": 2 00:16:18.360 } 00:16:18.360 ], 00:16:18.360 "driver_specific": {} 00:16:18.360 } 00:16:18.360 ] 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:18.360 
09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.360 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.361 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.361 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.361 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.361 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.361 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.634 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.634 "name": "Existed_Raid", 00:16:18.634 "uuid": "f2a9d2a1-1d33-4661-be8a-7129ee85889d", 00:16:18.634 "strip_size_kb": 64, 00:16:18.634 "state": "online", 00:16:18.634 "raid_level": "raid0", 00:16:18.634 "superblock": false, 00:16:18.634 "num_base_bdevs": 4, 00:16:18.634 "num_base_bdevs_discovered": 4, 00:16:18.634 
"num_base_bdevs_operational": 4, 00:16:18.634 "base_bdevs_list": [ 00:16:18.634 { 00:16:18.634 "name": "NewBaseBdev", 00:16:18.634 "uuid": "cd66ded0-847c-4f60-ba3c-35eabd69b63b", 00:16:18.634 "is_configured": true, 00:16:18.634 "data_offset": 0, 00:16:18.634 "data_size": 65536 00:16:18.635 }, 00:16:18.635 { 00:16:18.635 "name": "BaseBdev2", 00:16:18.635 "uuid": "a9698281-683e-4bdc-8184-5064fb9d385f", 00:16:18.635 "is_configured": true, 00:16:18.635 "data_offset": 0, 00:16:18.635 "data_size": 65536 00:16:18.635 }, 00:16:18.635 { 00:16:18.635 "name": "BaseBdev3", 00:16:18.635 "uuid": "1602f82e-9609-4482-bc87-17ed1a50d44e", 00:16:18.635 "is_configured": true, 00:16:18.635 "data_offset": 0, 00:16:18.635 "data_size": 65536 00:16:18.635 }, 00:16:18.635 { 00:16:18.635 "name": "BaseBdev4", 00:16:18.635 "uuid": "81bc174d-787a-407e-a84e-c7631168dc01", 00:16:18.635 "is_configured": true, 00:16:18.635 "data_offset": 0, 00:16:18.635 "data_size": 65536 00:16:18.635 } 00:16:18.635 ] 00:16:18.635 }' 00:16:18.635 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.635 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.905 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:18.905 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:18.905 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:18.905 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:18.905 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:18.905 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:18.905 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:16:18.905 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.905 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.905 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:18.905 [2024-10-15 09:16:02.749579] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:18.905 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.905 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:18.905 "name": "Existed_Raid", 00:16:18.905 "aliases": [ 00:16:18.905 "f2a9d2a1-1d33-4661-be8a-7129ee85889d" 00:16:18.905 ], 00:16:18.905 "product_name": "Raid Volume", 00:16:18.905 "block_size": 512, 00:16:18.905 "num_blocks": 262144, 00:16:18.905 "uuid": "f2a9d2a1-1d33-4661-be8a-7129ee85889d", 00:16:18.905 "assigned_rate_limits": { 00:16:18.905 "rw_ios_per_sec": 0, 00:16:18.905 "rw_mbytes_per_sec": 0, 00:16:18.905 "r_mbytes_per_sec": 0, 00:16:18.905 "w_mbytes_per_sec": 0 00:16:18.905 }, 00:16:18.905 "claimed": false, 00:16:18.905 "zoned": false, 00:16:18.905 "supported_io_types": { 00:16:18.905 "read": true, 00:16:18.905 "write": true, 00:16:18.905 "unmap": true, 00:16:18.905 "flush": true, 00:16:18.905 "reset": true, 00:16:18.905 "nvme_admin": false, 00:16:18.905 "nvme_io": false, 00:16:18.905 "nvme_io_md": false, 00:16:18.905 "write_zeroes": true, 00:16:18.905 "zcopy": false, 00:16:18.905 "get_zone_info": false, 00:16:18.905 "zone_management": false, 00:16:18.905 "zone_append": false, 00:16:18.905 "compare": false, 00:16:18.905 "compare_and_write": false, 00:16:18.905 "abort": false, 00:16:18.905 "seek_hole": false, 00:16:18.905 "seek_data": false, 00:16:18.905 "copy": false, 00:16:18.905 "nvme_iov_md": false 00:16:18.905 }, 00:16:18.905 "memory_domains": [ 00:16:18.905 { 00:16:18.905 "dma_device_id": "system", 
00:16:18.905 "dma_device_type": 1 00:16:18.905 }, 00:16:18.905 { 00:16:18.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.905 "dma_device_type": 2 00:16:18.905 }, 00:16:18.905 { 00:16:18.905 "dma_device_id": "system", 00:16:18.905 "dma_device_type": 1 00:16:18.905 }, 00:16:18.905 { 00:16:18.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.905 "dma_device_type": 2 00:16:18.905 }, 00:16:18.905 { 00:16:18.905 "dma_device_id": "system", 00:16:18.905 "dma_device_type": 1 00:16:18.905 }, 00:16:18.905 { 00:16:18.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.905 "dma_device_type": 2 00:16:18.905 }, 00:16:18.905 { 00:16:18.905 "dma_device_id": "system", 00:16:18.905 "dma_device_type": 1 00:16:18.905 }, 00:16:18.905 { 00:16:18.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.905 "dma_device_type": 2 00:16:18.905 } 00:16:18.905 ], 00:16:18.905 "driver_specific": { 00:16:18.905 "raid": { 00:16:18.905 "uuid": "f2a9d2a1-1d33-4661-be8a-7129ee85889d", 00:16:18.905 "strip_size_kb": 64, 00:16:18.905 "state": "online", 00:16:18.905 "raid_level": "raid0", 00:16:18.905 "superblock": false, 00:16:18.905 "num_base_bdevs": 4, 00:16:18.905 "num_base_bdevs_discovered": 4, 00:16:18.905 "num_base_bdevs_operational": 4, 00:16:18.905 "base_bdevs_list": [ 00:16:18.905 { 00:16:18.905 "name": "NewBaseBdev", 00:16:18.905 "uuid": "cd66ded0-847c-4f60-ba3c-35eabd69b63b", 00:16:18.905 "is_configured": true, 00:16:18.905 "data_offset": 0, 00:16:18.905 "data_size": 65536 00:16:18.905 }, 00:16:18.905 { 00:16:18.905 "name": "BaseBdev2", 00:16:18.905 "uuid": "a9698281-683e-4bdc-8184-5064fb9d385f", 00:16:18.905 "is_configured": true, 00:16:18.905 "data_offset": 0, 00:16:18.905 "data_size": 65536 00:16:18.905 }, 00:16:18.905 { 00:16:18.905 "name": "BaseBdev3", 00:16:18.905 "uuid": "1602f82e-9609-4482-bc87-17ed1a50d44e", 00:16:18.905 "is_configured": true, 00:16:18.905 "data_offset": 0, 00:16:18.905 "data_size": 65536 00:16:18.905 }, 00:16:18.905 { 00:16:18.905 "name": "BaseBdev4", 
00:16:18.905 "uuid": "81bc174d-787a-407e-a84e-c7631168dc01", 00:16:18.905 "is_configured": true, 00:16:18.905 "data_offset": 0, 00:16:18.905 "data_size": 65536 00:16:18.905 } 00:16:18.905 ] 00:16:18.905 } 00:16:18.905 } 00:16:18.905 }' 00:16:18.905 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:19.164 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:19.164 BaseBdev2 00:16:19.164 BaseBdev3 00:16:19.164 BaseBdev4' 00:16:19.164 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.164 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:19.164 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.164 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:19.164 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.164 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.164 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.164 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.164 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.164 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.164 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.164 09:16:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:19.164 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.164 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.164 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.164 09:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.164 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.164 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.164 09:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.164 09:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:19.164 09:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.164 09:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.164 09:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.165 09:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.165 09:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.165 09:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.165 09:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.165 09:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:19.165 09:16:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.165 09:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.165 09:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.165 09:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.423 09:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.423 09:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.423 09:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:19.423 09:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.423 09:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.423 [2024-10-15 09:16:03.125248] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:19.423 [2024-10-15 09:16:03.125300] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.423 [2024-10-15 09:16:03.125419] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.423 [2024-10-15 09:16:03.125576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.423 [2024-10-15 09:16:03.125591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:19.423 09:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.423 09:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69681 00:16:19.423 09:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 
-- # '[' -z 69681 ']' 00:16:19.424 09:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 69681 00:16:19.424 09:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:16:19.424 09:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:19.424 09:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69681 00:16:19.424 09:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:19.424 09:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:19.424 09:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69681' 00:16:19.424 killing process with pid 69681 00:16:19.424 09:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 69681 00:16:19.424 [2024-10-15 09:16:03.164497] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:19.424 09:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 69681 00:16:19.682 [2024-10-15 09:16:03.505330] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:21.059 00:16:21.059 real 0m13.147s 00:16:21.059 user 0m21.773s 00:16:21.059 sys 0m1.838s 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:21.059 ************************************ 00:16:21.059 END TEST raid_state_function_test 00:16:21.059 ************************************ 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.059 09:16:04 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:16:21.059 09:16:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:21.059 09:16:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:21.059 09:16:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:21.059 ************************************ 00:16:21.059 START TEST raid_state_function_test_sb 00:16:21.059 ************************************ 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:21.059 09:16:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:21.059 Process raid pid: 70370 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70370 00:16:21.059 09:16:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70370' 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70370 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 70370 ']' 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:21.059 09:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.059 [2024-10-15 09:16:04.792498] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:16:21.059 [2024-10-15 09:16:04.792932] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.059 [2024-10-15 09:16:04.969810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.318 [2024-10-15 09:16:05.117383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.577 [2024-10-15 09:16:05.348760] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.577 [2024-10-15 09:16:05.348822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.836 09:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:21.836 09:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:21.836 09:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:21.836 09:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.836 09:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.836 [2024-10-15 09:16:05.759236] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:21.836 [2024-10-15 09:16:05.759304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:21.836 [2024-10-15 09:16:05.759323] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:21.836 [2024-10-15 09:16:05.759340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:21.836 [2024-10-15 09:16:05.759351] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:16:21.836 [2024-10-15 09:16:05.759365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:21.836 [2024-10-15 09:16:05.759376] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:21.836 [2024-10-15 09:16:05.759391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:22.095 09:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.095 09:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:22.095 09:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.095 09:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.095 09:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:22.095 09:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.095 09:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.095 09:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.095 09:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.095 09:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.095 09:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.095 09:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.095 09:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.095 09:16:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.095 09:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.095 09:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.095 09:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.095 "name": "Existed_Raid", 00:16:22.095 "uuid": "9847963b-ec0e-4ce3-abf8-8e5beac5586d", 00:16:22.095 "strip_size_kb": 64, 00:16:22.095 "state": "configuring", 00:16:22.095 "raid_level": "raid0", 00:16:22.095 "superblock": true, 00:16:22.095 "num_base_bdevs": 4, 00:16:22.095 "num_base_bdevs_discovered": 0, 00:16:22.095 "num_base_bdevs_operational": 4, 00:16:22.095 "base_bdevs_list": [ 00:16:22.095 { 00:16:22.095 "name": "BaseBdev1", 00:16:22.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.095 "is_configured": false, 00:16:22.095 "data_offset": 0, 00:16:22.095 "data_size": 0 00:16:22.095 }, 00:16:22.095 { 00:16:22.095 "name": "BaseBdev2", 00:16:22.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.095 "is_configured": false, 00:16:22.095 "data_offset": 0, 00:16:22.095 "data_size": 0 00:16:22.095 }, 00:16:22.095 { 00:16:22.095 "name": "BaseBdev3", 00:16:22.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.095 "is_configured": false, 00:16:22.095 "data_offset": 0, 00:16:22.095 "data_size": 0 00:16:22.095 }, 00:16:22.095 { 00:16:22.095 "name": "BaseBdev4", 00:16:22.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.095 "is_configured": false, 00:16:22.095 "data_offset": 0, 00:16:22.095 "data_size": 0 00:16:22.095 } 00:16:22.095 ] 00:16:22.095 }' 00:16:22.095 09:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.095 09:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.358 09:16:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:22.358 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.358 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.358 [2024-10-15 09:16:06.239347] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:22.358 [2024-10-15 09:16:06.239399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:22.358 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.358 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:22.358 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.358 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.358 [2024-10-15 09:16:06.247366] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:22.358 [2024-10-15 09:16:06.247434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:22.358 [2024-10-15 09:16:06.247451] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:22.358 [2024-10-15 09:16:06.247498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:22.358 [2024-10-15 09:16:06.247523] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:22.358 [2024-10-15 09:16:06.247549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:22.358 [2024-10-15 09:16:06.247558] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:16:22.358 [2024-10-15 09:16:06.247572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:22.358 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.358 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:22.358 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.358 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.636 [2024-10-15 09:16:06.298200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.636 BaseBdev1 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.636 [ 00:16:22.636 { 00:16:22.636 "name": "BaseBdev1", 00:16:22.636 "aliases": [ 00:16:22.636 "bc02c7c4-83a4-413c-8af2-f671c57690ca" 00:16:22.636 ], 00:16:22.636 "product_name": "Malloc disk", 00:16:22.636 "block_size": 512, 00:16:22.636 "num_blocks": 65536, 00:16:22.636 "uuid": "bc02c7c4-83a4-413c-8af2-f671c57690ca", 00:16:22.636 "assigned_rate_limits": { 00:16:22.636 "rw_ios_per_sec": 0, 00:16:22.636 "rw_mbytes_per_sec": 0, 00:16:22.636 "r_mbytes_per_sec": 0, 00:16:22.636 "w_mbytes_per_sec": 0 00:16:22.636 }, 00:16:22.636 "claimed": true, 00:16:22.636 "claim_type": "exclusive_write", 00:16:22.636 "zoned": false, 00:16:22.636 "supported_io_types": { 00:16:22.636 "read": true, 00:16:22.636 "write": true, 00:16:22.636 "unmap": true, 00:16:22.636 "flush": true, 00:16:22.636 "reset": true, 00:16:22.636 "nvme_admin": false, 00:16:22.636 "nvme_io": false, 00:16:22.636 "nvme_io_md": false, 00:16:22.636 "write_zeroes": true, 00:16:22.636 "zcopy": true, 00:16:22.636 "get_zone_info": false, 00:16:22.636 "zone_management": false, 00:16:22.636 "zone_append": false, 00:16:22.636 "compare": false, 00:16:22.636 "compare_and_write": false, 00:16:22.636 "abort": true, 00:16:22.636 "seek_hole": false, 00:16:22.636 "seek_data": false, 00:16:22.636 "copy": true, 00:16:22.636 "nvme_iov_md": false 00:16:22.636 }, 00:16:22.636 "memory_domains": [ 00:16:22.636 { 00:16:22.636 "dma_device_id": "system", 00:16:22.636 "dma_device_type": 1 00:16:22.636 }, 00:16:22.636 { 00:16:22.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.636 "dma_device_type": 2 00:16:22.636 } 
00:16:22.636 ], 00:16:22.636 "driver_specific": {} 00:16:22.636 } 00:16:22.636 ] 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.636 09:16:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.636 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.636 "name": "Existed_Raid", 00:16:22.636 "uuid": "6f1a775e-1b6a-43ef-9e97-695ccf3bd36e", 00:16:22.636 "strip_size_kb": 64, 00:16:22.636 "state": "configuring", 00:16:22.636 "raid_level": "raid0", 00:16:22.636 "superblock": true, 00:16:22.636 "num_base_bdevs": 4, 00:16:22.636 "num_base_bdevs_discovered": 1, 00:16:22.636 "num_base_bdevs_operational": 4, 00:16:22.636 "base_bdevs_list": [ 00:16:22.636 { 00:16:22.636 "name": "BaseBdev1", 00:16:22.636 "uuid": "bc02c7c4-83a4-413c-8af2-f671c57690ca", 00:16:22.636 "is_configured": true, 00:16:22.636 "data_offset": 2048, 00:16:22.636 "data_size": 63488 00:16:22.636 }, 00:16:22.636 { 00:16:22.636 "name": "BaseBdev2", 00:16:22.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.636 "is_configured": false, 00:16:22.636 "data_offset": 0, 00:16:22.636 "data_size": 0 00:16:22.636 }, 00:16:22.636 { 00:16:22.636 "name": "BaseBdev3", 00:16:22.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.637 "is_configured": false, 00:16:22.637 "data_offset": 0, 00:16:22.637 "data_size": 0 00:16:22.637 }, 00:16:22.637 { 00:16:22.637 "name": "BaseBdev4", 00:16:22.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.637 "is_configured": false, 00:16:22.637 "data_offset": 0, 00:16:22.637 "data_size": 0 00:16:22.637 } 00:16:22.637 ] 00:16:22.637 }' 00:16:22.637 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.637 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.206 09:16:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.206 [2024-10-15 09:16:06.854386] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:23.206 [2024-10-15 09:16:06.854829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.206 [2024-10-15 09:16:06.862506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:23.206 [2024-10-15 09:16:06.865424] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:23.206 [2024-10-15 09:16:06.865697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:23.206 [2024-10-15 09:16:06.865833] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:23.206 [2024-10-15 09:16:06.865981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:23.206 [2024-10-15 09:16:06.866101] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:23.206 [2024-10-15 09:16:06.866186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:23.206 "name": "Existed_Raid", 00:16:23.206 "uuid": "7f883381-5bba-4fd9-a672-efd7795b51c7", 00:16:23.206 "strip_size_kb": 64, 00:16:23.206 "state": "configuring", 00:16:23.206 "raid_level": "raid0", 00:16:23.206 "superblock": true, 00:16:23.206 "num_base_bdevs": 4, 00:16:23.206 "num_base_bdevs_discovered": 1, 00:16:23.206 "num_base_bdevs_operational": 4, 00:16:23.206 "base_bdevs_list": [ 00:16:23.206 { 00:16:23.206 "name": "BaseBdev1", 00:16:23.206 "uuid": "bc02c7c4-83a4-413c-8af2-f671c57690ca", 00:16:23.206 "is_configured": true, 00:16:23.206 "data_offset": 2048, 00:16:23.206 "data_size": 63488 00:16:23.206 }, 00:16:23.206 { 00:16:23.206 "name": "BaseBdev2", 00:16:23.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.206 "is_configured": false, 00:16:23.206 "data_offset": 0, 00:16:23.206 "data_size": 0 00:16:23.206 }, 00:16:23.206 { 00:16:23.206 "name": "BaseBdev3", 00:16:23.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.206 "is_configured": false, 00:16:23.206 "data_offset": 0, 00:16:23.206 "data_size": 0 00:16:23.206 }, 00:16:23.206 { 00:16:23.206 "name": "BaseBdev4", 00:16:23.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.206 "is_configured": false, 00:16:23.206 "data_offset": 0, 00:16:23.206 "data_size": 0 00:16:23.206 } 00:16:23.206 ] 00:16:23.206 }' 00:16:23.206 09:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.207 09:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.775 [2024-10-15 09:16:07.446153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:16:23.775 BaseBdev2 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.775 [ 00:16:23.775 { 00:16:23.775 "name": "BaseBdev2", 00:16:23.775 "aliases": [ 00:16:23.775 "c1b776bc-62c1-4da6-9eca-c19ae2d40cb1" 00:16:23.775 ], 00:16:23.775 "product_name": "Malloc disk", 00:16:23.775 "block_size": 512, 00:16:23.775 "num_blocks": 65536, 00:16:23.775 "uuid": "c1b776bc-62c1-4da6-9eca-c19ae2d40cb1", 
00:16:23.775 "assigned_rate_limits": { 00:16:23.775 "rw_ios_per_sec": 0, 00:16:23.775 "rw_mbytes_per_sec": 0, 00:16:23.775 "r_mbytes_per_sec": 0, 00:16:23.775 "w_mbytes_per_sec": 0 00:16:23.775 }, 00:16:23.775 "claimed": true, 00:16:23.775 "claim_type": "exclusive_write", 00:16:23.775 "zoned": false, 00:16:23.775 "supported_io_types": { 00:16:23.775 "read": true, 00:16:23.775 "write": true, 00:16:23.775 "unmap": true, 00:16:23.775 "flush": true, 00:16:23.775 "reset": true, 00:16:23.775 "nvme_admin": false, 00:16:23.775 "nvme_io": false, 00:16:23.775 "nvme_io_md": false, 00:16:23.775 "write_zeroes": true, 00:16:23.775 "zcopy": true, 00:16:23.775 "get_zone_info": false, 00:16:23.775 "zone_management": false, 00:16:23.775 "zone_append": false, 00:16:23.775 "compare": false, 00:16:23.775 "compare_and_write": false, 00:16:23.775 "abort": true, 00:16:23.775 "seek_hole": false, 00:16:23.775 "seek_data": false, 00:16:23.775 "copy": true, 00:16:23.775 "nvme_iov_md": false 00:16:23.775 }, 00:16:23.775 "memory_domains": [ 00:16:23.775 { 00:16:23.775 "dma_device_id": "system", 00:16:23.775 "dma_device_type": 1 00:16:23.775 }, 00:16:23.775 { 00:16:23.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.775 "dma_device_type": 2 00:16:23.775 } 00:16:23.775 ], 00:16:23.775 "driver_specific": {} 00:16:23.775 } 00:16:23.775 ] 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.775 09:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.775 "name": "Existed_Raid", 00:16:23.775 "uuid": "7f883381-5bba-4fd9-a672-efd7795b51c7", 00:16:23.775 "strip_size_kb": 64, 00:16:23.775 "state": "configuring", 00:16:23.775 "raid_level": "raid0", 00:16:23.775 "superblock": true, 00:16:23.775 "num_base_bdevs": 4, 00:16:23.775 "num_base_bdevs_discovered": 2, 00:16:23.775 
"num_base_bdevs_operational": 4, 00:16:23.775 "base_bdevs_list": [ 00:16:23.775 { 00:16:23.775 "name": "BaseBdev1", 00:16:23.775 "uuid": "bc02c7c4-83a4-413c-8af2-f671c57690ca", 00:16:23.775 "is_configured": true, 00:16:23.775 "data_offset": 2048, 00:16:23.775 "data_size": 63488 00:16:23.775 }, 00:16:23.775 { 00:16:23.776 "name": "BaseBdev2", 00:16:23.776 "uuid": "c1b776bc-62c1-4da6-9eca-c19ae2d40cb1", 00:16:23.776 "is_configured": true, 00:16:23.776 "data_offset": 2048, 00:16:23.776 "data_size": 63488 00:16:23.776 }, 00:16:23.776 { 00:16:23.776 "name": "BaseBdev3", 00:16:23.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.776 "is_configured": false, 00:16:23.776 "data_offset": 0, 00:16:23.776 "data_size": 0 00:16:23.776 }, 00:16:23.776 { 00:16:23.776 "name": "BaseBdev4", 00:16:23.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.776 "is_configured": false, 00:16:23.776 "data_offset": 0, 00:16:23.776 "data_size": 0 00:16:23.776 } 00:16:23.776 ] 00:16:23.776 }' 00:16:23.776 09:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.776 09:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.343 [2024-10-15 09:16:08.062263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:24.343 BaseBdev3 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.343 [ 00:16:24.343 { 00:16:24.343 "name": "BaseBdev3", 00:16:24.343 "aliases": [ 00:16:24.343 "2b1850cc-bcc4-4dfa-a3b7-9a6254a0e6fc" 00:16:24.343 ], 00:16:24.343 "product_name": "Malloc disk", 00:16:24.343 "block_size": 512, 00:16:24.343 "num_blocks": 65536, 00:16:24.343 "uuid": "2b1850cc-bcc4-4dfa-a3b7-9a6254a0e6fc", 00:16:24.343 "assigned_rate_limits": { 00:16:24.343 "rw_ios_per_sec": 0, 00:16:24.343 "rw_mbytes_per_sec": 0, 00:16:24.343 "r_mbytes_per_sec": 0, 00:16:24.343 "w_mbytes_per_sec": 0 00:16:24.343 }, 00:16:24.343 "claimed": true, 00:16:24.343 "claim_type": "exclusive_write", 00:16:24.343 "zoned": false, 00:16:24.343 "supported_io_types": { 
00:16:24.343 "read": true, 00:16:24.343 "write": true, 00:16:24.343 "unmap": true, 00:16:24.343 "flush": true, 00:16:24.343 "reset": true, 00:16:24.343 "nvme_admin": false, 00:16:24.343 "nvme_io": false, 00:16:24.343 "nvme_io_md": false, 00:16:24.343 "write_zeroes": true, 00:16:24.343 "zcopy": true, 00:16:24.343 "get_zone_info": false, 00:16:24.343 "zone_management": false, 00:16:24.343 "zone_append": false, 00:16:24.343 "compare": false, 00:16:24.343 "compare_and_write": false, 00:16:24.343 "abort": true, 00:16:24.343 "seek_hole": false, 00:16:24.343 "seek_data": false, 00:16:24.343 "copy": true, 00:16:24.343 "nvme_iov_md": false 00:16:24.343 }, 00:16:24.343 "memory_domains": [ 00:16:24.343 { 00:16:24.343 "dma_device_id": "system", 00:16:24.343 "dma_device_type": 1 00:16:24.343 }, 00:16:24.343 { 00:16:24.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.343 "dma_device_type": 2 00:16:24.343 } 00:16:24.343 ], 00:16:24.343 "driver_specific": {} 00:16:24.343 } 00:16:24.343 ] 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.343 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.343 "name": "Existed_Raid", 00:16:24.343 "uuid": "7f883381-5bba-4fd9-a672-efd7795b51c7", 00:16:24.343 "strip_size_kb": 64, 00:16:24.343 "state": "configuring", 00:16:24.343 "raid_level": "raid0", 00:16:24.343 "superblock": true, 00:16:24.343 "num_base_bdevs": 4, 00:16:24.343 "num_base_bdevs_discovered": 3, 00:16:24.343 "num_base_bdevs_operational": 4, 00:16:24.343 "base_bdevs_list": [ 00:16:24.343 { 00:16:24.343 "name": "BaseBdev1", 00:16:24.343 "uuid": "bc02c7c4-83a4-413c-8af2-f671c57690ca", 00:16:24.343 "is_configured": true, 00:16:24.343 "data_offset": 2048, 00:16:24.343 "data_size": 63488 00:16:24.343 }, 00:16:24.343 { 00:16:24.343 "name": "BaseBdev2", 00:16:24.343 
"uuid": "c1b776bc-62c1-4da6-9eca-c19ae2d40cb1", 00:16:24.343 "is_configured": true, 00:16:24.343 "data_offset": 2048, 00:16:24.343 "data_size": 63488 00:16:24.343 }, 00:16:24.343 { 00:16:24.344 "name": "BaseBdev3", 00:16:24.344 "uuid": "2b1850cc-bcc4-4dfa-a3b7-9a6254a0e6fc", 00:16:24.344 "is_configured": true, 00:16:24.344 "data_offset": 2048, 00:16:24.344 "data_size": 63488 00:16:24.344 }, 00:16:24.344 { 00:16:24.344 "name": "BaseBdev4", 00:16:24.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.344 "is_configured": false, 00:16:24.344 "data_offset": 0, 00:16:24.344 "data_size": 0 00:16:24.344 } 00:16:24.344 ] 00:16:24.344 }' 00:16:24.344 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.344 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.911 [2024-10-15 09:16:08.654365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:24.911 [2024-10-15 09:16:08.654898] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:24.911 [2024-10-15 09:16:08.654925] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:24.911 [2024-10-15 09:16:08.655292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:24.911 BaseBdev4 00:16:24.911 [2024-10-15 09:16:08.655498] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:24.911 [2024-10-15 09:16:08.655522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:16:24.911 [2024-10-15 09:16:08.655716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.911 [ 00:16:24.911 { 00:16:24.911 "name": "BaseBdev4", 00:16:24.911 "aliases": [ 00:16:24.911 "0a0bc847-611c-42b7-a96f-9c9229c9b60d" 00:16:24.911 ], 00:16:24.911 "product_name": "Malloc disk", 00:16:24.911 "block_size": 512, 00:16:24.911 
"num_blocks": 65536, 00:16:24.911 "uuid": "0a0bc847-611c-42b7-a96f-9c9229c9b60d", 00:16:24.911 "assigned_rate_limits": { 00:16:24.911 "rw_ios_per_sec": 0, 00:16:24.911 "rw_mbytes_per_sec": 0, 00:16:24.911 "r_mbytes_per_sec": 0, 00:16:24.911 "w_mbytes_per_sec": 0 00:16:24.911 }, 00:16:24.911 "claimed": true, 00:16:24.911 "claim_type": "exclusive_write", 00:16:24.911 "zoned": false, 00:16:24.911 "supported_io_types": { 00:16:24.911 "read": true, 00:16:24.911 "write": true, 00:16:24.911 "unmap": true, 00:16:24.911 "flush": true, 00:16:24.911 "reset": true, 00:16:24.911 "nvme_admin": false, 00:16:24.911 "nvme_io": false, 00:16:24.911 "nvme_io_md": false, 00:16:24.911 "write_zeroes": true, 00:16:24.911 "zcopy": true, 00:16:24.911 "get_zone_info": false, 00:16:24.911 "zone_management": false, 00:16:24.911 "zone_append": false, 00:16:24.911 "compare": false, 00:16:24.911 "compare_and_write": false, 00:16:24.911 "abort": true, 00:16:24.911 "seek_hole": false, 00:16:24.911 "seek_data": false, 00:16:24.911 "copy": true, 00:16:24.911 "nvme_iov_md": false 00:16:24.911 }, 00:16:24.911 "memory_domains": [ 00:16:24.911 { 00:16:24.911 "dma_device_id": "system", 00:16:24.911 "dma_device_type": 1 00:16:24.911 }, 00:16:24.911 { 00:16:24.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.911 "dma_device_type": 2 00:16:24.911 } 00:16:24.911 ], 00:16:24.911 "driver_specific": {} 00:16:24.911 } 00:16:24.911 ] 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.911 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.912 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.912 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.912 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.912 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.912 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.912 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.912 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.912 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.912 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.912 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.912 "name": "Existed_Raid", 00:16:24.912 "uuid": "7f883381-5bba-4fd9-a672-efd7795b51c7", 00:16:24.912 "strip_size_kb": 64, 00:16:24.912 "state": "online", 00:16:24.912 "raid_level": "raid0", 00:16:24.912 "superblock": true, 00:16:24.912 "num_base_bdevs": 4, 
00:16:24.912 "num_base_bdevs_discovered": 4, 00:16:24.912 "num_base_bdevs_operational": 4, 00:16:24.912 "base_bdevs_list": [ 00:16:24.912 { 00:16:24.912 "name": "BaseBdev1", 00:16:24.912 "uuid": "bc02c7c4-83a4-413c-8af2-f671c57690ca", 00:16:24.912 "is_configured": true, 00:16:24.912 "data_offset": 2048, 00:16:24.912 "data_size": 63488 00:16:24.912 }, 00:16:24.912 { 00:16:24.912 "name": "BaseBdev2", 00:16:24.912 "uuid": "c1b776bc-62c1-4da6-9eca-c19ae2d40cb1", 00:16:24.912 "is_configured": true, 00:16:24.912 "data_offset": 2048, 00:16:24.912 "data_size": 63488 00:16:24.912 }, 00:16:24.912 { 00:16:24.912 "name": "BaseBdev3", 00:16:24.912 "uuid": "2b1850cc-bcc4-4dfa-a3b7-9a6254a0e6fc", 00:16:24.912 "is_configured": true, 00:16:24.912 "data_offset": 2048, 00:16:24.912 "data_size": 63488 00:16:24.912 }, 00:16:24.912 { 00:16:24.912 "name": "BaseBdev4", 00:16:24.912 "uuid": "0a0bc847-611c-42b7-a96f-9c9229c9b60d", 00:16:24.912 "is_configured": true, 00:16:24.912 "data_offset": 2048, 00:16:24.912 "data_size": 63488 00:16:24.912 } 00:16:24.912 ] 00:16:24.912 }' 00:16:24.912 09:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.912 09:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.480 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:25.480 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:25.480 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:25.480 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:25.480 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:25.480 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:25.480 
09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:25.480 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:25.480 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.480 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.480 [2024-10-15 09:16:09.215104] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.480 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.480 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:25.480 "name": "Existed_Raid", 00:16:25.480 "aliases": [ 00:16:25.480 "7f883381-5bba-4fd9-a672-efd7795b51c7" 00:16:25.480 ], 00:16:25.480 "product_name": "Raid Volume", 00:16:25.480 "block_size": 512, 00:16:25.480 "num_blocks": 253952, 00:16:25.480 "uuid": "7f883381-5bba-4fd9-a672-efd7795b51c7", 00:16:25.480 "assigned_rate_limits": { 00:16:25.480 "rw_ios_per_sec": 0, 00:16:25.480 "rw_mbytes_per_sec": 0, 00:16:25.480 "r_mbytes_per_sec": 0, 00:16:25.480 "w_mbytes_per_sec": 0 00:16:25.480 }, 00:16:25.480 "claimed": false, 00:16:25.480 "zoned": false, 00:16:25.480 "supported_io_types": { 00:16:25.480 "read": true, 00:16:25.480 "write": true, 00:16:25.480 "unmap": true, 00:16:25.480 "flush": true, 00:16:25.480 "reset": true, 00:16:25.480 "nvme_admin": false, 00:16:25.480 "nvme_io": false, 00:16:25.480 "nvme_io_md": false, 00:16:25.480 "write_zeroes": true, 00:16:25.480 "zcopy": false, 00:16:25.480 "get_zone_info": false, 00:16:25.480 "zone_management": false, 00:16:25.480 "zone_append": false, 00:16:25.480 "compare": false, 00:16:25.480 "compare_and_write": false, 00:16:25.480 "abort": false, 00:16:25.480 "seek_hole": false, 00:16:25.480 "seek_data": false, 00:16:25.480 "copy": false, 00:16:25.480 
"nvme_iov_md": false 00:16:25.480 }, 00:16:25.480 "memory_domains": [ 00:16:25.480 { 00:16:25.480 "dma_device_id": "system", 00:16:25.480 "dma_device_type": 1 00:16:25.480 }, 00:16:25.480 { 00:16:25.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.480 "dma_device_type": 2 00:16:25.480 }, 00:16:25.480 { 00:16:25.480 "dma_device_id": "system", 00:16:25.480 "dma_device_type": 1 00:16:25.480 }, 00:16:25.480 { 00:16:25.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.480 "dma_device_type": 2 00:16:25.480 }, 00:16:25.480 { 00:16:25.480 "dma_device_id": "system", 00:16:25.480 "dma_device_type": 1 00:16:25.480 }, 00:16:25.480 { 00:16:25.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.480 "dma_device_type": 2 00:16:25.480 }, 00:16:25.480 { 00:16:25.480 "dma_device_id": "system", 00:16:25.480 "dma_device_type": 1 00:16:25.480 }, 00:16:25.480 { 00:16:25.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.480 "dma_device_type": 2 00:16:25.480 } 00:16:25.480 ], 00:16:25.480 "driver_specific": { 00:16:25.480 "raid": { 00:16:25.480 "uuid": "7f883381-5bba-4fd9-a672-efd7795b51c7", 00:16:25.480 "strip_size_kb": 64, 00:16:25.480 "state": "online", 00:16:25.480 "raid_level": "raid0", 00:16:25.480 "superblock": true, 00:16:25.480 "num_base_bdevs": 4, 00:16:25.480 "num_base_bdevs_discovered": 4, 00:16:25.480 "num_base_bdevs_operational": 4, 00:16:25.480 "base_bdevs_list": [ 00:16:25.480 { 00:16:25.480 "name": "BaseBdev1", 00:16:25.480 "uuid": "bc02c7c4-83a4-413c-8af2-f671c57690ca", 00:16:25.480 "is_configured": true, 00:16:25.480 "data_offset": 2048, 00:16:25.480 "data_size": 63488 00:16:25.480 }, 00:16:25.480 { 00:16:25.480 "name": "BaseBdev2", 00:16:25.480 "uuid": "c1b776bc-62c1-4da6-9eca-c19ae2d40cb1", 00:16:25.480 "is_configured": true, 00:16:25.480 "data_offset": 2048, 00:16:25.480 "data_size": 63488 00:16:25.480 }, 00:16:25.480 { 00:16:25.480 "name": "BaseBdev3", 00:16:25.480 "uuid": "2b1850cc-bcc4-4dfa-a3b7-9a6254a0e6fc", 00:16:25.480 "is_configured": true, 
00:16:25.480 "data_offset": 2048, 00:16:25.480 "data_size": 63488 00:16:25.480 }, 00:16:25.480 { 00:16:25.480 "name": "BaseBdev4", 00:16:25.480 "uuid": "0a0bc847-611c-42b7-a96f-9c9229c9b60d", 00:16:25.480 "is_configured": true, 00:16:25.480 "data_offset": 2048, 00:16:25.480 "data_size": 63488 00:16:25.480 } 00:16:25.480 ] 00:16:25.480 } 00:16:25.480 } 00:16:25.480 }' 00:16:25.480 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:25.480 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:25.480 BaseBdev2 00:16:25.480 BaseBdev3 00:16:25.480 BaseBdev4' 00:16:25.480 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.480 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:25.480 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.480 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:25.480 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.480 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.480 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.480 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.740 09:16:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.740 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.740 [2024-10-15 09:16:09.582907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:25.740 [2024-10-15 09:16:09.583075] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.740 [2024-10-15 09:16:09.583197] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.999 "name": "Existed_Raid", 00:16:25.999 "uuid": "7f883381-5bba-4fd9-a672-efd7795b51c7", 00:16:25.999 "strip_size_kb": 64, 00:16:25.999 "state": "offline", 00:16:25.999 "raid_level": "raid0", 00:16:25.999 "superblock": true, 00:16:25.999 "num_base_bdevs": 4, 00:16:25.999 "num_base_bdevs_discovered": 3, 00:16:25.999 "num_base_bdevs_operational": 3, 00:16:25.999 "base_bdevs_list": [ 00:16:25.999 { 00:16:25.999 "name": null, 00:16:25.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.999 "is_configured": false, 00:16:25.999 "data_offset": 0, 00:16:25.999 "data_size": 63488 00:16:25.999 }, 00:16:25.999 { 00:16:25.999 "name": "BaseBdev2", 00:16:25.999 "uuid": "c1b776bc-62c1-4da6-9eca-c19ae2d40cb1", 00:16:25.999 "is_configured": true, 00:16:25.999 "data_offset": 2048, 00:16:25.999 "data_size": 63488 00:16:25.999 }, 00:16:25.999 { 00:16:25.999 "name": "BaseBdev3", 00:16:25.999 "uuid": "2b1850cc-bcc4-4dfa-a3b7-9a6254a0e6fc", 00:16:25.999 "is_configured": true, 00:16:25.999 "data_offset": 2048, 00:16:25.999 "data_size": 63488 00:16:25.999 }, 00:16:25.999 { 00:16:25.999 "name": "BaseBdev4", 00:16:25.999 "uuid": "0a0bc847-611c-42b7-a96f-9c9229c9b60d", 00:16:25.999 "is_configured": true, 00:16:25.999 "data_offset": 2048, 00:16:25.999 "data_size": 63488 00:16:25.999 } 00:16:25.999 ] 00:16:25.999 }' 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.999 09:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.566 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:26.566 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:26.566 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.566 
09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:26.566 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.566 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.566 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.566 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:26.566 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:26.566 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:26.566 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.566 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.566 [2024-10-15 09:16:10.261683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:26.566 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.566 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:26.566 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:26.566 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.566 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.567 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:26.567 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.567 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:26.567 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:26.567 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:26.567 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:26.567 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.567 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.567 [2024-10-15 09:16:10.413310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:26.825 09:16:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.825 [2024-10-15 09:16:10.575480] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:26.825 [2024-10-15 09:16:10.575572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.825 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.084 BaseBdev2 00:16:27.084 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.084 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:27.084 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:27.084 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.085 [ 00:16:27.085 { 00:16:27.085 "name": "BaseBdev2", 00:16:27.085 "aliases": [ 00:16:27.085 
"6a40cd82-bf63-4529-acf2-7668905cfdc7" 00:16:27.085 ], 00:16:27.085 "product_name": "Malloc disk", 00:16:27.085 "block_size": 512, 00:16:27.085 "num_blocks": 65536, 00:16:27.085 "uuid": "6a40cd82-bf63-4529-acf2-7668905cfdc7", 00:16:27.085 "assigned_rate_limits": { 00:16:27.085 "rw_ios_per_sec": 0, 00:16:27.085 "rw_mbytes_per_sec": 0, 00:16:27.085 "r_mbytes_per_sec": 0, 00:16:27.085 "w_mbytes_per_sec": 0 00:16:27.085 }, 00:16:27.085 "claimed": false, 00:16:27.085 "zoned": false, 00:16:27.085 "supported_io_types": { 00:16:27.085 "read": true, 00:16:27.085 "write": true, 00:16:27.085 "unmap": true, 00:16:27.085 "flush": true, 00:16:27.085 "reset": true, 00:16:27.085 "nvme_admin": false, 00:16:27.085 "nvme_io": false, 00:16:27.085 "nvme_io_md": false, 00:16:27.085 "write_zeroes": true, 00:16:27.085 "zcopy": true, 00:16:27.085 "get_zone_info": false, 00:16:27.085 "zone_management": false, 00:16:27.085 "zone_append": false, 00:16:27.085 "compare": false, 00:16:27.085 "compare_and_write": false, 00:16:27.085 "abort": true, 00:16:27.085 "seek_hole": false, 00:16:27.085 "seek_data": false, 00:16:27.085 "copy": true, 00:16:27.085 "nvme_iov_md": false 00:16:27.085 }, 00:16:27.085 "memory_domains": [ 00:16:27.085 { 00:16:27.085 "dma_device_id": "system", 00:16:27.085 "dma_device_type": 1 00:16:27.085 }, 00:16:27.085 { 00:16:27.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.085 "dma_device_type": 2 00:16:27.085 } 00:16:27.085 ], 00:16:27.085 "driver_specific": {} 00:16:27.085 } 00:16:27.085 ] 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:27.085 09:16:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.085 BaseBdev3 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.085 [ 00:16:27.085 { 
00:16:27.085 "name": "BaseBdev3", 00:16:27.085 "aliases": [ 00:16:27.085 "bf3cfbb6-79e6-465f-92e6-0867f6cdf7b8" 00:16:27.085 ], 00:16:27.085 "product_name": "Malloc disk", 00:16:27.085 "block_size": 512, 00:16:27.085 "num_blocks": 65536, 00:16:27.085 "uuid": "bf3cfbb6-79e6-465f-92e6-0867f6cdf7b8", 00:16:27.085 "assigned_rate_limits": { 00:16:27.085 "rw_ios_per_sec": 0, 00:16:27.085 "rw_mbytes_per_sec": 0, 00:16:27.085 "r_mbytes_per_sec": 0, 00:16:27.085 "w_mbytes_per_sec": 0 00:16:27.085 }, 00:16:27.085 "claimed": false, 00:16:27.085 "zoned": false, 00:16:27.085 "supported_io_types": { 00:16:27.085 "read": true, 00:16:27.085 "write": true, 00:16:27.085 "unmap": true, 00:16:27.085 "flush": true, 00:16:27.085 "reset": true, 00:16:27.085 "nvme_admin": false, 00:16:27.085 "nvme_io": false, 00:16:27.085 "nvme_io_md": false, 00:16:27.085 "write_zeroes": true, 00:16:27.085 "zcopy": true, 00:16:27.085 "get_zone_info": false, 00:16:27.085 "zone_management": false, 00:16:27.085 "zone_append": false, 00:16:27.085 "compare": false, 00:16:27.085 "compare_and_write": false, 00:16:27.085 "abort": true, 00:16:27.085 "seek_hole": false, 00:16:27.085 "seek_data": false, 00:16:27.085 "copy": true, 00:16:27.085 "nvme_iov_md": false 00:16:27.085 }, 00:16:27.085 "memory_domains": [ 00:16:27.085 { 00:16:27.085 "dma_device_id": "system", 00:16:27.085 "dma_device_type": 1 00:16:27.085 }, 00:16:27.085 { 00:16:27.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.085 "dma_device_type": 2 00:16:27.085 } 00:16:27.085 ], 00:16:27.085 "driver_specific": {} 00:16:27.085 } 00:16:27.085 ] 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.085 BaseBdev4 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.085 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:16:27.086 [ 00:16:27.086 { 00:16:27.086 "name": "BaseBdev4", 00:16:27.086 "aliases": [ 00:16:27.086 "3535c8b3-f58d-4d72-b9e8-3ce271a83d13" 00:16:27.086 ], 00:16:27.086 "product_name": "Malloc disk", 00:16:27.086 "block_size": 512, 00:16:27.086 "num_blocks": 65536, 00:16:27.086 "uuid": "3535c8b3-f58d-4d72-b9e8-3ce271a83d13", 00:16:27.086 "assigned_rate_limits": { 00:16:27.086 "rw_ios_per_sec": 0, 00:16:27.086 "rw_mbytes_per_sec": 0, 00:16:27.086 "r_mbytes_per_sec": 0, 00:16:27.086 "w_mbytes_per_sec": 0 00:16:27.086 }, 00:16:27.086 "claimed": false, 00:16:27.086 "zoned": false, 00:16:27.086 "supported_io_types": { 00:16:27.086 "read": true, 00:16:27.086 "write": true, 00:16:27.086 "unmap": true, 00:16:27.086 "flush": true, 00:16:27.086 "reset": true, 00:16:27.086 "nvme_admin": false, 00:16:27.086 "nvme_io": false, 00:16:27.086 "nvme_io_md": false, 00:16:27.086 "write_zeroes": true, 00:16:27.086 "zcopy": true, 00:16:27.086 "get_zone_info": false, 00:16:27.086 "zone_management": false, 00:16:27.086 "zone_append": false, 00:16:27.086 "compare": false, 00:16:27.086 "compare_and_write": false, 00:16:27.086 "abort": true, 00:16:27.086 "seek_hole": false, 00:16:27.086 "seek_data": false, 00:16:27.086 "copy": true, 00:16:27.086 "nvme_iov_md": false 00:16:27.086 }, 00:16:27.086 "memory_domains": [ 00:16:27.086 { 00:16:27.086 "dma_device_id": "system", 00:16:27.086 "dma_device_type": 1 00:16:27.086 }, 00:16:27.086 { 00:16:27.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.086 "dma_device_type": 2 00:16:27.086 } 00:16:27.086 ], 00:16:27.086 "driver_specific": {} 00:16:27.086 } 00:16:27.086 ] 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:27.086 09:16:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.086 [2024-10-15 09:16:10.956350] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:27.086 [2024-10-15 09:16:10.956541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:27.086 [2024-10-15 09:16:10.956688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:27.086 [2024-10-15 09:16:10.959562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:27.086 [2024-10-15 09:16:10.959627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.086 09:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.345 09:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.345 "name": "Existed_Raid", 00:16:27.345 "uuid": "eaaedd3e-099b-46b0-9fd8-dc7b0630de94", 00:16:27.345 "strip_size_kb": 64, 00:16:27.345 "state": "configuring", 00:16:27.345 "raid_level": "raid0", 00:16:27.345 "superblock": true, 00:16:27.345 "num_base_bdevs": 4, 00:16:27.345 "num_base_bdevs_discovered": 3, 00:16:27.345 "num_base_bdevs_operational": 4, 00:16:27.345 "base_bdevs_list": [ 00:16:27.345 { 00:16:27.345 "name": "BaseBdev1", 00:16:27.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.345 "is_configured": false, 00:16:27.345 "data_offset": 0, 00:16:27.345 "data_size": 0 00:16:27.345 }, 00:16:27.345 { 00:16:27.345 "name": "BaseBdev2", 00:16:27.345 "uuid": "6a40cd82-bf63-4529-acf2-7668905cfdc7", 00:16:27.345 "is_configured": true, 00:16:27.345 "data_offset": 2048, 00:16:27.345 "data_size": 63488 
00:16:27.345 }, 00:16:27.345 { 00:16:27.345 "name": "BaseBdev3", 00:16:27.345 "uuid": "bf3cfbb6-79e6-465f-92e6-0867f6cdf7b8", 00:16:27.345 "is_configured": true, 00:16:27.345 "data_offset": 2048, 00:16:27.345 "data_size": 63488 00:16:27.345 }, 00:16:27.345 { 00:16:27.345 "name": "BaseBdev4", 00:16:27.345 "uuid": "3535c8b3-f58d-4d72-b9e8-3ce271a83d13", 00:16:27.345 "is_configured": true, 00:16:27.345 "data_offset": 2048, 00:16:27.345 "data_size": 63488 00:16:27.345 } 00:16:27.345 ] 00:16:27.345 }' 00:16:27.345 09:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.345 09:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.604 09:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:27.604 09:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.604 09:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.604 [2024-10-15 09:16:11.508631] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:27.604 09:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.604 09:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:27.604 09:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.604 09:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.604 09:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:27.604 09:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.604 09:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:27.604 09:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.604 09:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.604 09:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.604 09:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.604 09:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.604 09:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.604 09:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.604 09:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.863 09:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.863 09:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.863 "name": "Existed_Raid", 00:16:27.863 "uuid": "eaaedd3e-099b-46b0-9fd8-dc7b0630de94", 00:16:27.863 "strip_size_kb": 64, 00:16:27.863 "state": "configuring", 00:16:27.863 "raid_level": "raid0", 00:16:27.863 "superblock": true, 00:16:27.863 "num_base_bdevs": 4, 00:16:27.863 "num_base_bdevs_discovered": 2, 00:16:27.863 "num_base_bdevs_operational": 4, 00:16:27.863 "base_bdevs_list": [ 00:16:27.863 { 00:16:27.863 "name": "BaseBdev1", 00:16:27.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.863 "is_configured": false, 00:16:27.863 "data_offset": 0, 00:16:27.863 "data_size": 0 00:16:27.863 }, 00:16:27.863 { 00:16:27.863 "name": null, 00:16:27.863 "uuid": "6a40cd82-bf63-4529-acf2-7668905cfdc7", 00:16:27.863 "is_configured": false, 00:16:27.863 "data_offset": 0, 00:16:27.863 "data_size": 63488 
00:16:27.863 }, 00:16:27.863 { 00:16:27.863 "name": "BaseBdev3", 00:16:27.863 "uuid": "bf3cfbb6-79e6-465f-92e6-0867f6cdf7b8", 00:16:27.863 "is_configured": true, 00:16:27.863 "data_offset": 2048, 00:16:27.863 "data_size": 63488 00:16:27.863 }, 00:16:27.864 { 00:16:27.864 "name": "BaseBdev4", 00:16:27.864 "uuid": "3535c8b3-f58d-4d72-b9e8-3ce271a83d13", 00:16:27.864 "is_configured": true, 00:16:27.864 "data_offset": 2048, 00:16:27.864 "data_size": 63488 00:16:27.864 } 00:16:27.864 ] 00:16:27.864 }' 00:16:27.864 09:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.864 09:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.431 [2024-10-15 09:16:12.156709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:28.431 BaseBdev1 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.431 [ 00:16:28.431 { 00:16:28.431 "name": "BaseBdev1", 00:16:28.431 "aliases": [ 00:16:28.431 "9fa94785-e702-451c-982c-80afc340e257" 00:16:28.431 ], 00:16:28.431 "product_name": "Malloc disk", 00:16:28.431 "block_size": 512, 00:16:28.431 "num_blocks": 65536, 00:16:28.431 "uuid": "9fa94785-e702-451c-982c-80afc340e257", 00:16:28.431 "assigned_rate_limits": { 00:16:28.431 "rw_ios_per_sec": 0, 00:16:28.431 "rw_mbytes_per_sec": 0, 
00:16:28.431 "r_mbytes_per_sec": 0, 00:16:28.431 "w_mbytes_per_sec": 0 00:16:28.431 }, 00:16:28.431 "claimed": true, 00:16:28.431 "claim_type": "exclusive_write", 00:16:28.431 "zoned": false, 00:16:28.431 "supported_io_types": { 00:16:28.431 "read": true, 00:16:28.431 "write": true, 00:16:28.431 "unmap": true, 00:16:28.431 "flush": true, 00:16:28.431 "reset": true, 00:16:28.431 "nvme_admin": false, 00:16:28.431 "nvme_io": false, 00:16:28.431 "nvme_io_md": false, 00:16:28.431 "write_zeroes": true, 00:16:28.431 "zcopy": true, 00:16:28.431 "get_zone_info": false, 00:16:28.431 "zone_management": false, 00:16:28.431 "zone_append": false, 00:16:28.431 "compare": false, 00:16:28.431 "compare_and_write": false, 00:16:28.431 "abort": true, 00:16:28.431 "seek_hole": false, 00:16:28.431 "seek_data": false, 00:16:28.431 "copy": true, 00:16:28.431 "nvme_iov_md": false 00:16:28.431 }, 00:16:28.431 "memory_domains": [ 00:16:28.431 { 00:16:28.431 "dma_device_id": "system", 00:16:28.431 "dma_device_type": 1 00:16:28.431 }, 00:16:28.431 { 00:16:28.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.431 "dma_device_type": 2 00:16:28.431 } 00:16:28.431 ], 00:16:28.431 "driver_specific": {} 00:16:28.431 } 00:16:28.431 ] 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:28.431 09:16:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.431 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.432 "name": "Existed_Raid", 00:16:28.432 "uuid": "eaaedd3e-099b-46b0-9fd8-dc7b0630de94", 00:16:28.432 "strip_size_kb": 64, 00:16:28.432 "state": "configuring", 00:16:28.432 "raid_level": "raid0", 00:16:28.432 "superblock": true, 00:16:28.432 "num_base_bdevs": 4, 00:16:28.432 "num_base_bdevs_discovered": 3, 00:16:28.432 "num_base_bdevs_operational": 4, 00:16:28.432 "base_bdevs_list": [ 00:16:28.432 { 00:16:28.432 "name": "BaseBdev1", 00:16:28.432 "uuid": "9fa94785-e702-451c-982c-80afc340e257", 00:16:28.432 "is_configured": true, 00:16:28.432 "data_offset": 2048, 00:16:28.432 "data_size": 63488 00:16:28.432 }, 00:16:28.432 { 
00:16:28.432 "name": null, 00:16:28.432 "uuid": "6a40cd82-bf63-4529-acf2-7668905cfdc7", 00:16:28.432 "is_configured": false, 00:16:28.432 "data_offset": 0, 00:16:28.432 "data_size": 63488 00:16:28.432 }, 00:16:28.432 { 00:16:28.432 "name": "BaseBdev3", 00:16:28.432 "uuid": "bf3cfbb6-79e6-465f-92e6-0867f6cdf7b8", 00:16:28.432 "is_configured": true, 00:16:28.432 "data_offset": 2048, 00:16:28.432 "data_size": 63488 00:16:28.432 }, 00:16:28.432 { 00:16:28.432 "name": "BaseBdev4", 00:16:28.432 "uuid": "3535c8b3-f58d-4d72-b9e8-3ce271a83d13", 00:16:28.432 "is_configured": true, 00:16:28.432 "data_offset": 2048, 00:16:28.432 "data_size": 63488 00:16:28.432 } 00:16:28.432 ] 00:16:28.432 }' 00:16:28.432 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.432 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.999 [2024-10-15 09:16:12.776981] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.999 09:16:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.999 "name": "Existed_Raid", 00:16:28.999 "uuid": "eaaedd3e-099b-46b0-9fd8-dc7b0630de94", 00:16:28.999 "strip_size_kb": 64, 00:16:28.999 "state": "configuring", 00:16:28.999 "raid_level": "raid0", 00:16:28.999 "superblock": true, 00:16:28.999 "num_base_bdevs": 4, 00:16:28.999 "num_base_bdevs_discovered": 2, 00:16:28.999 "num_base_bdevs_operational": 4, 00:16:28.999 "base_bdevs_list": [ 00:16:28.999 { 00:16:28.999 "name": "BaseBdev1", 00:16:28.999 "uuid": "9fa94785-e702-451c-982c-80afc340e257", 00:16:28.999 "is_configured": true, 00:16:28.999 "data_offset": 2048, 00:16:28.999 "data_size": 63488 00:16:28.999 }, 00:16:28.999 { 00:16:28.999 "name": null, 00:16:28.999 "uuid": "6a40cd82-bf63-4529-acf2-7668905cfdc7", 00:16:28.999 "is_configured": false, 00:16:28.999 "data_offset": 0, 00:16:28.999 "data_size": 63488 00:16:28.999 }, 00:16:28.999 { 00:16:28.999 "name": null, 00:16:28.999 "uuid": "bf3cfbb6-79e6-465f-92e6-0867f6cdf7b8", 00:16:28.999 "is_configured": false, 00:16:28.999 "data_offset": 0, 00:16:28.999 "data_size": 63488 00:16:28.999 }, 00:16:28.999 { 00:16:28.999 "name": "BaseBdev4", 00:16:28.999 "uuid": "3535c8b3-f58d-4d72-b9e8-3ce271a83d13", 00:16:28.999 "is_configured": true, 00:16:28.999 "data_offset": 2048, 00:16:28.999 "data_size": 63488 00:16:28.999 } 00:16:28.999 ] 00:16:28.999 }' 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.999 09:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.566 
09:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.566 [2024-10-15 09:16:13.369236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.566 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.566 "name": "Existed_Raid", 00:16:29.566 "uuid": "eaaedd3e-099b-46b0-9fd8-dc7b0630de94", 00:16:29.566 "strip_size_kb": 64, 00:16:29.566 "state": "configuring", 00:16:29.566 "raid_level": "raid0", 00:16:29.566 "superblock": true, 00:16:29.566 "num_base_bdevs": 4, 00:16:29.566 "num_base_bdevs_discovered": 3, 00:16:29.566 "num_base_bdevs_operational": 4, 00:16:29.566 "base_bdevs_list": [ 00:16:29.566 { 00:16:29.566 "name": "BaseBdev1", 00:16:29.566 "uuid": "9fa94785-e702-451c-982c-80afc340e257", 00:16:29.566 "is_configured": true, 00:16:29.566 "data_offset": 2048, 00:16:29.566 "data_size": 63488 00:16:29.566 }, 00:16:29.566 { 00:16:29.566 "name": null, 00:16:29.566 "uuid": "6a40cd82-bf63-4529-acf2-7668905cfdc7", 00:16:29.566 "is_configured": false, 00:16:29.566 "data_offset": 0, 00:16:29.566 "data_size": 63488 00:16:29.566 }, 00:16:29.566 { 00:16:29.567 "name": "BaseBdev3", 00:16:29.567 "uuid": "bf3cfbb6-79e6-465f-92e6-0867f6cdf7b8", 00:16:29.567 "is_configured": true, 00:16:29.567 "data_offset": 2048, 00:16:29.567 "data_size": 63488 00:16:29.567 }, 00:16:29.567 { 00:16:29.567 "name": "BaseBdev4", 00:16:29.567 "uuid": 
"3535c8b3-f58d-4d72-b9e8-3ce271a83d13", 00:16:29.567 "is_configured": true, 00:16:29.567 "data_offset": 2048, 00:16:29.567 "data_size": 63488 00:16:29.567 } 00:16:29.567 ] 00:16:29.567 }' 00:16:29.567 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.567 09:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.134 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.134 09:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.134 09:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.134 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:30.134 09:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.134 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:30.134 09:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:30.134 09:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.134 09:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.134 [2024-10-15 09:16:13.969490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:30.393 09:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.393 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:30.393 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.393 09:16:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.393 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:30.393 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.393 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.393 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.393 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.393 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.393 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.393 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.393 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.393 09:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.393 09:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.393 09:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.393 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.393 "name": "Existed_Raid", 00:16:30.393 "uuid": "eaaedd3e-099b-46b0-9fd8-dc7b0630de94", 00:16:30.393 "strip_size_kb": 64, 00:16:30.393 "state": "configuring", 00:16:30.393 "raid_level": "raid0", 00:16:30.393 "superblock": true, 00:16:30.393 "num_base_bdevs": 4, 00:16:30.393 "num_base_bdevs_discovered": 2, 00:16:30.393 "num_base_bdevs_operational": 4, 00:16:30.393 "base_bdevs_list": [ 00:16:30.393 { 00:16:30.393 "name": null, 00:16:30.393 
"uuid": "9fa94785-e702-451c-982c-80afc340e257", 00:16:30.393 "is_configured": false, 00:16:30.393 "data_offset": 0, 00:16:30.393 "data_size": 63488 00:16:30.393 }, 00:16:30.393 { 00:16:30.393 "name": null, 00:16:30.393 "uuid": "6a40cd82-bf63-4529-acf2-7668905cfdc7", 00:16:30.393 "is_configured": false, 00:16:30.393 "data_offset": 0, 00:16:30.393 "data_size": 63488 00:16:30.393 }, 00:16:30.393 { 00:16:30.393 "name": "BaseBdev3", 00:16:30.393 "uuid": "bf3cfbb6-79e6-465f-92e6-0867f6cdf7b8", 00:16:30.393 "is_configured": true, 00:16:30.393 "data_offset": 2048, 00:16:30.393 "data_size": 63488 00:16:30.393 }, 00:16:30.393 { 00:16:30.393 "name": "BaseBdev4", 00:16:30.393 "uuid": "3535c8b3-f58d-4d72-b9e8-3ce271a83d13", 00:16:30.393 "is_configured": true, 00:16:30.393 "data_offset": 2048, 00:16:30.393 "data_size": 63488 00:16:30.393 } 00:16:30.393 ] 00:16:30.393 }' 00:16:30.393 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.393 09:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.960 [2024-10-15 09:16:14.670105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.960 "name": "Existed_Raid", 00:16:30.960 "uuid": "eaaedd3e-099b-46b0-9fd8-dc7b0630de94", 00:16:30.960 "strip_size_kb": 64, 00:16:30.960 "state": "configuring", 00:16:30.960 "raid_level": "raid0", 00:16:30.960 "superblock": true, 00:16:30.960 "num_base_bdevs": 4, 00:16:30.960 "num_base_bdevs_discovered": 3, 00:16:30.960 "num_base_bdevs_operational": 4, 00:16:30.960 "base_bdevs_list": [ 00:16:30.960 { 00:16:30.960 "name": null, 00:16:30.960 "uuid": "9fa94785-e702-451c-982c-80afc340e257", 00:16:30.960 "is_configured": false, 00:16:30.960 "data_offset": 0, 00:16:30.960 "data_size": 63488 00:16:30.960 }, 00:16:30.960 { 00:16:30.960 "name": "BaseBdev2", 00:16:30.960 "uuid": "6a40cd82-bf63-4529-acf2-7668905cfdc7", 00:16:30.960 "is_configured": true, 00:16:30.960 "data_offset": 2048, 00:16:30.960 "data_size": 63488 00:16:30.960 }, 00:16:30.960 { 00:16:30.960 "name": "BaseBdev3", 00:16:30.960 "uuid": "bf3cfbb6-79e6-465f-92e6-0867f6cdf7b8", 00:16:30.960 "is_configured": true, 00:16:30.960 "data_offset": 2048, 00:16:30.960 "data_size": 63488 00:16:30.960 }, 00:16:30.960 { 00:16:30.960 "name": "BaseBdev4", 00:16:30.960 "uuid": "3535c8b3-f58d-4d72-b9e8-3ce271a83d13", 00:16:30.960 "is_configured": true, 00:16:30.960 "data_offset": 2048, 00:16:30.960 "data_size": 63488 00:16:30.960 } 00:16:30.960 ] 00:16:30.960 }' 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.960 09:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.528 09:16:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9fa94785-e702-451c-982c-80afc340e257 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.528 [2024-10-15 09:16:15.339665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:31.528 [2024-10-15 09:16:15.340008] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:31.528 [2024-10-15 09:16:15.340027] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:31.528 [2024-10-15 09:16:15.340398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:16:31.528 NewBaseBdev 00:16:31.528 [2024-10-15 09:16:15.340585] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:31.528 [2024-10-15 09:16:15.340606] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:31.528 [2024-10-15 09:16:15.340770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.528 09:16:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.528 [ 00:16:31.528 { 00:16:31.528 "name": "NewBaseBdev", 00:16:31.528 "aliases": [ 00:16:31.528 "9fa94785-e702-451c-982c-80afc340e257" 00:16:31.528 ], 00:16:31.528 "product_name": "Malloc disk", 00:16:31.528 "block_size": 512, 00:16:31.528 "num_blocks": 65536, 00:16:31.528 "uuid": "9fa94785-e702-451c-982c-80afc340e257", 00:16:31.528 "assigned_rate_limits": { 00:16:31.528 "rw_ios_per_sec": 0, 00:16:31.528 "rw_mbytes_per_sec": 0, 00:16:31.528 "r_mbytes_per_sec": 0, 00:16:31.528 "w_mbytes_per_sec": 0 00:16:31.528 }, 00:16:31.528 "claimed": true, 00:16:31.528 "claim_type": "exclusive_write", 00:16:31.528 "zoned": false, 00:16:31.528 "supported_io_types": { 00:16:31.528 "read": true, 00:16:31.528 "write": true, 00:16:31.528 "unmap": true, 00:16:31.528 "flush": true, 00:16:31.528 "reset": true, 00:16:31.528 "nvme_admin": false, 00:16:31.528 "nvme_io": false, 00:16:31.528 "nvme_io_md": false, 00:16:31.528 "write_zeroes": true, 00:16:31.528 "zcopy": true, 00:16:31.528 "get_zone_info": false, 00:16:31.528 "zone_management": false, 00:16:31.528 "zone_append": false, 00:16:31.528 "compare": false, 00:16:31.528 "compare_and_write": false, 00:16:31.528 "abort": true, 00:16:31.528 "seek_hole": false, 00:16:31.528 "seek_data": false, 00:16:31.528 "copy": true, 00:16:31.528 "nvme_iov_md": false 00:16:31.528 }, 00:16:31.528 "memory_domains": [ 00:16:31.528 { 00:16:31.528 "dma_device_id": "system", 00:16:31.528 "dma_device_type": 1 00:16:31.528 }, 00:16:31.528 { 00:16:31.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.528 "dma_device_type": 2 00:16:31.528 } 00:16:31.528 ], 00:16:31.528 "driver_specific": {} 00:16:31.528 } 00:16:31.528 ] 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:31.528 09:16:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.528 "name": "Existed_Raid", 00:16:31.528 "uuid": "eaaedd3e-099b-46b0-9fd8-dc7b0630de94", 00:16:31.528 "strip_size_kb": 64, 00:16:31.528 
"state": "online", 00:16:31.528 "raid_level": "raid0", 00:16:31.528 "superblock": true, 00:16:31.528 "num_base_bdevs": 4, 00:16:31.528 "num_base_bdevs_discovered": 4, 00:16:31.528 "num_base_bdevs_operational": 4, 00:16:31.528 "base_bdevs_list": [ 00:16:31.528 { 00:16:31.528 "name": "NewBaseBdev", 00:16:31.528 "uuid": "9fa94785-e702-451c-982c-80afc340e257", 00:16:31.528 "is_configured": true, 00:16:31.528 "data_offset": 2048, 00:16:31.528 "data_size": 63488 00:16:31.528 }, 00:16:31.528 { 00:16:31.528 "name": "BaseBdev2", 00:16:31.528 "uuid": "6a40cd82-bf63-4529-acf2-7668905cfdc7", 00:16:31.528 "is_configured": true, 00:16:31.528 "data_offset": 2048, 00:16:31.528 "data_size": 63488 00:16:31.528 }, 00:16:31.528 { 00:16:31.528 "name": "BaseBdev3", 00:16:31.528 "uuid": "bf3cfbb6-79e6-465f-92e6-0867f6cdf7b8", 00:16:31.528 "is_configured": true, 00:16:31.528 "data_offset": 2048, 00:16:31.528 "data_size": 63488 00:16:31.528 }, 00:16:31.528 { 00:16:31.528 "name": "BaseBdev4", 00:16:31.528 "uuid": "3535c8b3-f58d-4d72-b9e8-3ce271a83d13", 00:16:31.528 "is_configured": true, 00:16:31.528 "data_offset": 2048, 00:16:31.528 "data_size": 63488 00:16:31.528 } 00:16:31.528 ] 00:16:31.528 }' 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.528 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.095 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:32.095 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:32.095 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:32.095 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:32.095 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:32.095 
09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:32.095 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:32.095 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:32.095 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.095 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.095 [2024-10-15 09:16:15.900383] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.095 09:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.095 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:32.095 "name": "Existed_Raid", 00:16:32.095 "aliases": [ 00:16:32.095 "eaaedd3e-099b-46b0-9fd8-dc7b0630de94" 00:16:32.095 ], 00:16:32.095 "product_name": "Raid Volume", 00:16:32.095 "block_size": 512, 00:16:32.095 "num_blocks": 253952, 00:16:32.096 "uuid": "eaaedd3e-099b-46b0-9fd8-dc7b0630de94", 00:16:32.096 "assigned_rate_limits": { 00:16:32.096 "rw_ios_per_sec": 0, 00:16:32.096 "rw_mbytes_per_sec": 0, 00:16:32.096 "r_mbytes_per_sec": 0, 00:16:32.096 "w_mbytes_per_sec": 0 00:16:32.096 }, 00:16:32.096 "claimed": false, 00:16:32.096 "zoned": false, 00:16:32.096 "supported_io_types": { 00:16:32.096 "read": true, 00:16:32.096 "write": true, 00:16:32.096 "unmap": true, 00:16:32.096 "flush": true, 00:16:32.096 "reset": true, 00:16:32.096 "nvme_admin": false, 00:16:32.096 "nvme_io": false, 00:16:32.096 "nvme_io_md": false, 00:16:32.096 "write_zeroes": true, 00:16:32.096 "zcopy": false, 00:16:32.096 "get_zone_info": false, 00:16:32.096 "zone_management": false, 00:16:32.096 "zone_append": false, 00:16:32.096 "compare": false, 00:16:32.096 "compare_and_write": false, 00:16:32.096 "abort": 
false, 00:16:32.096 "seek_hole": false, 00:16:32.096 "seek_data": false, 00:16:32.096 "copy": false, 00:16:32.096 "nvme_iov_md": false 00:16:32.096 }, 00:16:32.096 "memory_domains": [ 00:16:32.096 { 00:16:32.096 "dma_device_id": "system", 00:16:32.096 "dma_device_type": 1 00:16:32.096 }, 00:16:32.096 { 00:16:32.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.096 "dma_device_type": 2 00:16:32.096 }, 00:16:32.096 { 00:16:32.096 "dma_device_id": "system", 00:16:32.096 "dma_device_type": 1 00:16:32.096 }, 00:16:32.096 { 00:16:32.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.096 "dma_device_type": 2 00:16:32.096 }, 00:16:32.096 { 00:16:32.096 "dma_device_id": "system", 00:16:32.096 "dma_device_type": 1 00:16:32.096 }, 00:16:32.096 { 00:16:32.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.096 "dma_device_type": 2 00:16:32.096 }, 00:16:32.096 { 00:16:32.096 "dma_device_id": "system", 00:16:32.096 "dma_device_type": 1 00:16:32.096 }, 00:16:32.096 { 00:16:32.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.096 "dma_device_type": 2 00:16:32.096 } 00:16:32.096 ], 00:16:32.096 "driver_specific": { 00:16:32.096 "raid": { 00:16:32.096 "uuid": "eaaedd3e-099b-46b0-9fd8-dc7b0630de94", 00:16:32.096 "strip_size_kb": 64, 00:16:32.096 "state": "online", 00:16:32.096 "raid_level": "raid0", 00:16:32.096 "superblock": true, 00:16:32.096 "num_base_bdevs": 4, 00:16:32.096 "num_base_bdevs_discovered": 4, 00:16:32.096 "num_base_bdevs_operational": 4, 00:16:32.096 "base_bdevs_list": [ 00:16:32.096 { 00:16:32.096 "name": "NewBaseBdev", 00:16:32.096 "uuid": "9fa94785-e702-451c-982c-80afc340e257", 00:16:32.096 "is_configured": true, 00:16:32.096 "data_offset": 2048, 00:16:32.096 "data_size": 63488 00:16:32.096 }, 00:16:32.096 { 00:16:32.096 "name": "BaseBdev2", 00:16:32.096 "uuid": "6a40cd82-bf63-4529-acf2-7668905cfdc7", 00:16:32.096 "is_configured": true, 00:16:32.096 "data_offset": 2048, 00:16:32.096 "data_size": 63488 00:16:32.096 }, 00:16:32.096 { 00:16:32.096 
"name": "BaseBdev3", 00:16:32.096 "uuid": "bf3cfbb6-79e6-465f-92e6-0867f6cdf7b8", 00:16:32.096 "is_configured": true, 00:16:32.096 "data_offset": 2048, 00:16:32.096 "data_size": 63488 00:16:32.096 }, 00:16:32.096 { 00:16:32.096 "name": "BaseBdev4", 00:16:32.096 "uuid": "3535c8b3-f58d-4d72-b9e8-3ce271a83d13", 00:16:32.096 "is_configured": true, 00:16:32.096 "data_offset": 2048, 00:16:32.096 "data_size": 63488 00:16:32.096 } 00:16:32.096 ] 00:16:32.096 } 00:16:32.096 } 00:16:32.096 }' 00:16:32.096 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:32.096 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:32.096 BaseBdev2 00:16:32.096 BaseBdev3 00:16:32.096 BaseBdev4' 00:16:32.096 09:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.355 09:16:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.355 [2024-10-15 09:16:16.264029] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:32.355 [2024-10-15 09:16:16.264073] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.355 [2024-10-15 09:16:16.264215] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.355 [2024-10-15 09:16:16.264318] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.355 [2024-10-15 09:16:16.264337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70370 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 70370 ']' 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 70370 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:32.355 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70370 00:16:32.616 killing process with pid 70370 00:16:32.616 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:32.616 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:32.616 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70370' 00:16:32.616 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 70370 00:16:32.616 [2024-10-15 09:16:16.302097] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:32.616 09:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 70370 00:16:32.874 [2024-10-15 09:16:16.685515] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:34.249 09:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:34.249 00:16:34.249 real 0m13.129s 00:16:34.249 user 0m21.516s 00:16:34.249 sys 0m2.010s 00:16:34.249 ************************************ 00:16:34.249 END TEST raid_state_function_test_sb 00:16:34.249 
************************************ 00:16:34.249 09:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:34.249 09:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.249 09:16:17 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:16:34.249 09:16:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:34.249 09:16:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:34.249 09:16:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:34.249 ************************************ 00:16:34.249 START TEST raid_superblock_test 00:16:34.249 ************************************ 00:16:34.249 09:16:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:16:34.249 09:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:16:34.249 09:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:34.249 09:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:34.249 09:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:34.249 09:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:34.249 09:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:34.249 09:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:34.249 09:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:34.249 09:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:34.249 09:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:34.249 09:16:17 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:34.249 09:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:34.249 09:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:34.249 09:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:16:34.249 09:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:34.249 09:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:34.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.249 09:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71054 00:16:34.250 09:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71054 00:16:34.250 09:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:34.250 09:16:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71054 ']' 00:16:34.250 09:16:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.250 09:16:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:34.250 09:16:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.250 09:16:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:34.250 09:16:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.250 [2024-10-15 09:16:17.963524] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:16:34.250 [2024-10-15 09:16:17.963724] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71054 ] 00:16:34.250 [2024-10-15 09:16:18.146525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.507 [2024-10-15 09:16:18.319393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.765 [2024-10-15 09:16:18.553223] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:34.765 [2024-10-15 09:16:18.553281] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:35.332 
09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.332 malloc1 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.332 [2024-10-15 09:16:19.111790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:35.332 [2024-10-15 09:16:19.112034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.332 [2024-10-15 09:16:19.112135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:35.332 [2024-10-15 09:16:19.112394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.332 [2024-10-15 09:16:19.115462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.332 [2024-10-15 09:16:19.115639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:35.332 pt1 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.332 malloc2 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.332 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.332 [2024-10-15 09:16:19.175154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:35.332 [2024-10-15 09:16:19.175397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.332 [2024-10-15 09:16:19.175481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:35.332 [2024-10-15 09:16:19.175605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.332 [2024-10-15 09:16:19.178637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.332 [2024-10-15 09:16:19.178801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:35.332 
pt2 00:16:35.333 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.333 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:35.333 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:35.333 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:35.333 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:35.333 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:35.333 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:35.333 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:35.333 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:35.333 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:35.333 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.333 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.333 malloc3 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.592 [2024-10-15 09:16:19.266322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:35.592 [2024-10-15 09:16:19.266627] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.592 [2024-10-15 09:16:19.266728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:35.592 [2024-10-15 09:16:19.266975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.592 [2024-10-15 09:16:19.270176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.592 [2024-10-15 09:16:19.270341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:35.592 pt3 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.592 malloc4 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.592 [2024-10-15 09:16:19.328842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:35.592 [2024-10-15 09:16:19.329081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.592 [2024-10-15 09:16:19.329292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:35.592 [2024-10-15 09:16:19.329412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.592 [2024-10-15 09:16:19.332831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.592 [2024-10-15 09:16:19.333063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:35.592 pt4 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.592 [2024-10-15 09:16:19.341483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:35.592 [2024-10-15 
09:16:19.344292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:35.592 [2024-10-15 09:16:19.344381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:35.592 [2024-10-15 09:16:19.344472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:35.592 [2024-10-15 09:16:19.344769] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:35.592 [2024-10-15 09:16:19.344787] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:35.592 [2024-10-15 09:16:19.345184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:35.592 [2024-10-15 09:16:19.345431] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:35.592 [2024-10-15 09:16:19.345452] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:35.592 [2024-10-15 09:16:19.345781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.592 "name": "raid_bdev1", 00:16:35.592 "uuid": "b0cd762b-81f4-43fa-8f38-bc6b320cb64f", 00:16:35.592 "strip_size_kb": 64, 00:16:35.592 "state": "online", 00:16:35.592 "raid_level": "raid0", 00:16:35.592 "superblock": true, 00:16:35.592 "num_base_bdevs": 4, 00:16:35.592 "num_base_bdevs_discovered": 4, 00:16:35.592 "num_base_bdevs_operational": 4, 00:16:35.592 "base_bdevs_list": [ 00:16:35.592 { 00:16:35.592 "name": "pt1", 00:16:35.592 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:35.592 "is_configured": true, 00:16:35.592 "data_offset": 2048, 00:16:35.592 "data_size": 63488 00:16:35.592 }, 00:16:35.592 { 00:16:35.592 "name": "pt2", 00:16:35.592 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.592 "is_configured": true, 00:16:35.592 "data_offset": 2048, 00:16:35.592 "data_size": 63488 00:16:35.592 }, 00:16:35.592 { 00:16:35.592 "name": "pt3", 00:16:35.592 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:35.592 "is_configured": true, 00:16:35.592 "data_offset": 2048, 00:16:35.592 
"data_size": 63488 00:16:35.592 }, 00:16:35.592 { 00:16:35.592 "name": "pt4", 00:16:35.592 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:35.592 "is_configured": true, 00:16:35.592 "data_offset": 2048, 00:16:35.592 "data_size": 63488 00:16:35.592 } 00:16:35.592 ] 00:16:35.592 }' 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.592 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.160 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:36.160 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:36.160 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:36.160 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:36.160 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:36.160 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:36.160 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:36.160 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:36.160 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.160 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.160 [2024-10-15 09:16:19.874386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.160 09:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.160 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:36.160 "name": "raid_bdev1", 00:16:36.160 "aliases": [ 00:16:36.160 "b0cd762b-81f4-43fa-8f38-bc6b320cb64f" 
00:16:36.160 ], 00:16:36.160 "product_name": "Raid Volume", 00:16:36.160 "block_size": 512, 00:16:36.160 "num_blocks": 253952, 00:16:36.160 "uuid": "b0cd762b-81f4-43fa-8f38-bc6b320cb64f", 00:16:36.160 "assigned_rate_limits": { 00:16:36.160 "rw_ios_per_sec": 0, 00:16:36.160 "rw_mbytes_per_sec": 0, 00:16:36.160 "r_mbytes_per_sec": 0, 00:16:36.160 "w_mbytes_per_sec": 0 00:16:36.160 }, 00:16:36.160 "claimed": false, 00:16:36.160 "zoned": false, 00:16:36.160 "supported_io_types": { 00:16:36.160 "read": true, 00:16:36.160 "write": true, 00:16:36.160 "unmap": true, 00:16:36.160 "flush": true, 00:16:36.160 "reset": true, 00:16:36.160 "nvme_admin": false, 00:16:36.160 "nvme_io": false, 00:16:36.160 "nvme_io_md": false, 00:16:36.160 "write_zeroes": true, 00:16:36.160 "zcopy": false, 00:16:36.160 "get_zone_info": false, 00:16:36.160 "zone_management": false, 00:16:36.160 "zone_append": false, 00:16:36.160 "compare": false, 00:16:36.160 "compare_and_write": false, 00:16:36.160 "abort": false, 00:16:36.160 "seek_hole": false, 00:16:36.160 "seek_data": false, 00:16:36.160 "copy": false, 00:16:36.160 "nvme_iov_md": false 00:16:36.160 }, 00:16:36.160 "memory_domains": [ 00:16:36.160 { 00:16:36.160 "dma_device_id": "system", 00:16:36.160 "dma_device_type": 1 00:16:36.160 }, 00:16:36.160 { 00:16:36.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.160 "dma_device_type": 2 00:16:36.160 }, 00:16:36.160 { 00:16:36.160 "dma_device_id": "system", 00:16:36.160 "dma_device_type": 1 00:16:36.160 }, 00:16:36.160 { 00:16:36.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.160 "dma_device_type": 2 00:16:36.160 }, 00:16:36.160 { 00:16:36.160 "dma_device_id": "system", 00:16:36.160 "dma_device_type": 1 00:16:36.160 }, 00:16:36.160 { 00:16:36.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.160 "dma_device_type": 2 00:16:36.160 }, 00:16:36.160 { 00:16:36.160 "dma_device_id": "system", 00:16:36.160 "dma_device_type": 1 00:16:36.160 }, 00:16:36.160 { 00:16:36.160 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:36.160 "dma_device_type": 2 00:16:36.160 } 00:16:36.160 ], 00:16:36.160 "driver_specific": { 00:16:36.160 "raid": { 00:16:36.160 "uuid": "b0cd762b-81f4-43fa-8f38-bc6b320cb64f", 00:16:36.160 "strip_size_kb": 64, 00:16:36.160 "state": "online", 00:16:36.160 "raid_level": "raid0", 00:16:36.160 "superblock": true, 00:16:36.160 "num_base_bdevs": 4, 00:16:36.160 "num_base_bdevs_discovered": 4, 00:16:36.160 "num_base_bdevs_operational": 4, 00:16:36.160 "base_bdevs_list": [ 00:16:36.160 { 00:16:36.160 "name": "pt1", 00:16:36.160 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:36.160 "is_configured": true, 00:16:36.160 "data_offset": 2048, 00:16:36.160 "data_size": 63488 00:16:36.160 }, 00:16:36.160 { 00:16:36.160 "name": "pt2", 00:16:36.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.160 "is_configured": true, 00:16:36.160 "data_offset": 2048, 00:16:36.160 "data_size": 63488 00:16:36.160 }, 00:16:36.160 { 00:16:36.160 "name": "pt3", 00:16:36.160 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:36.160 "is_configured": true, 00:16:36.160 "data_offset": 2048, 00:16:36.160 "data_size": 63488 00:16:36.160 }, 00:16:36.160 { 00:16:36.160 "name": "pt4", 00:16:36.160 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:36.160 "is_configured": true, 00:16:36.160 "data_offset": 2048, 00:16:36.160 "data_size": 63488 00:16:36.160 } 00:16:36.160 ] 00:16:36.160 } 00:16:36.160 } 00:16:36.160 }' 00:16:36.160 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:36.160 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:36.160 pt2 00:16:36.160 pt3 00:16:36.160 pt4' 00:16:36.160 09:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.160 09:16:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:36.160 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.160 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:36.160 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.160 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.160 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.160 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.160 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.160 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.160 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.160 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:36.160 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.160 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.160 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.420 09:16:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.420 [2024-10-15 09:16:20.258457] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b0cd762b-81f4-43fa-8f38-bc6b320cb64f 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b0cd762b-81f4-43fa-8f38-bc6b320cb64f ']' 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.420 [2024-10-15 09:16:20.306086] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:36.420 [2024-10-15 09:16:20.306127] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.420 [2024-10-15 09:16:20.306279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.420 [2024-10-15 09:16:20.306380] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.420 [2024-10-15 09:16:20.306414] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:36.420 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.421 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.421 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.421 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:16:36.421 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.421 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.680 [2024-10-15 09:16:20.466171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:36.680 [2024-10-15 09:16:20.469088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:36.680 [2024-10-15 09:16:20.469299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:36.680 [2024-10-15 09:16:20.469410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:36.680 [2024-10-15 09:16:20.469562] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:36.680 [2024-10-15 09:16:20.469819] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:36.680 [2024-10-15 09:16:20.470010] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:36.680 [2024-10-15 09:16:20.470238] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:36.680 [2024-10-15 09:16:20.470401] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:36.680 [2024-10-15 09:16:20.470541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:16:36.680 request: 00:16:36.680 { 00:16:36.680 "name": "raid_bdev1", 00:16:36.680 "raid_level": "raid0", 00:16:36.680 "base_bdevs": [ 00:16:36.680 "malloc1", 00:16:36.680 "malloc2", 00:16:36.680 "malloc3", 00:16:36.680 "malloc4" 00:16:36.680 ], 00:16:36.680 "strip_size_kb": 64, 00:16:36.680 "superblock": false, 00:16:36.680 "method": "bdev_raid_create", 00:16:36.680 "req_id": 1 00:16:36.680 } 00:16:36.680 Got JSON-RPC error response 00:16:36.680 response: 00:16:36.680 { 00:16:36.680 "code": -17, 00:16:36.680 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:36.680 } 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.680 [2024-10-15 09:16:20.538993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:36.680 [2024-10-15 09:16:20.539091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.680 [2024-10-15 09:16:20.539123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:36.680 [2024-10-15 09:16:20.539179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.680 [2024-10-15 09:16:20.542407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.680 [2024-10-15 09:16:20.542462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:36.680 [2024-10-15 09:16:20.542640] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:36.680 [2024-10-15 09:16:20.542729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:36.680 pt1 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.680 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.939 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.939 "name": "raid_bdev1", 00:16:36.939 "uuid": "b0cd762b-81f4-43fa-8f38-bc6b320cb64f", 00:16:36.939 "strip_size_kb": 64, 00:16:36.939 "state": "configuring", 00:16:36.939 "raid_level": "raid0", 00:16:36.939 "superblock": true, 00:16:36.939 "num_base_bdevs": 4, 00:16:36.939 "num_base_bdevs_discovered": 1, 00:16:36.939 "num_base_bdevs_operational": 4, 00:16:36.939 "base_bdevs_list": [ 00:16:36.939 { 00:16:36.939 "name": "pt1", 00:16:36.939 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:36.939 "is_configured": true, 00:16:36.939 "data_offset": 2048, 00:16:36.939 "data_size": 63488 00:16:36.939 }, 00:16:36.939 { 00:16:36.939 "name": null, 00:16:36.939 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.939 "is_configured": false, 00:16:36.939 "data_offset": 2048, 00:16:36.939 "data_size": 63488 00:16:36.939 }, 00:16:36.939 { 00:16:36.939 "name": null, 00:16:36.939 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:16:36.939 "is_configured": false, 00:16:36.939 "data_offset": 2048, 00:16:36.939 "data_size": 63488 00:16:36.939 }, 00:16:36.939 { 00:16:36.939 "name": null, 00:16:36.939 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:36.939 "is_configured": false, 00:16:36.939 "data_offset": 2048, 00:16:36.939 "data_size": 63488 00:16:36.939 } 00:16:36.939 ] 00:16:36.939 }' 00:16:36.939 09:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.939 09:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.198 [2024-10-15 09:16:21.083264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:37.198 [2024-10-15 09:16:21.083383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.198 [2024-10-15 09:16:21.083416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:37.198 [2024-10-15 09:16:21.083436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.198 [2024-10-15 09:16:21.084207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.198 [2024-10-15 09:16:21.084279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:37.198 [2024-10-15 09:16:21.084400] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:37.198 [2024-10-15 09:16:21.084459] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:37.198 pt2 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.198 [2024-10-15 09:16:21.091251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.198 09:16:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.198 09:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.458 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.458 "name": "raid_bdev1", 00:16:37.458 "uuid": "b0cd762b-81f4-43fa-8f38-bc6b320cb64f", 00:16:37.458 "strip_size_kb": 64, 00:16:37.458 "state": "configuring", 00:16:37.458 "raid_level": "raid0", 00:16:37.458 "superblock": true, 00:16:37.458 "num_base_bdevs": 4, 00:16:37.458 "num_base_bdevs_discovered": 1, 00:16:37.458 "num_base_bdevs_operational": 4, 00:16:37.458 "base_bdevs_list": [ 00:16:37.458 { 00:16:37.458 "name": "pt1", 00:16:37.458 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:37.458 "is_configured": true, 00:16:37.458 "data_offset": 2048, 00:16:37.458 "data_size": 63488 00:16:37.458 }, 00:16:37.458 { 00:16:37.458 "name": null, 00:16:37.458 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.458 "is_configured": false, 00:16:37.458 "data_offset": 0, 00:16:37.458 "data_size": 63488 00:16:37.458 }, 00:16:37.458 { 00:16:37.458 "name": null, 00:16:37.458 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:37.458 "is_configured": false, 00:16:37.458 "data_offset": 2048, 00:16:37.458 "data_size": 63488 00:16:37.458 }, 00:16:37.458 { 00:16:37.458 "name": null, 00:16:37.458 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:37.458 "is_configured": false, 00:16:37.458 "data_offset": 2048, 00:16:37.458 "data_size": 63488 00:16:37.458 } 00:16:37.458 ] 00:16:37.458 }' 00:16:37.458 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.458 09:16:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:37.717 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:37.717 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:37.717 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:37.717 09:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.717 09:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.717 [2024-10-15 09:16:21.607515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:37.717 [2024-10-15 09:16:21.607604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.717 [2024-10-15 09:16:21.607657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:37.717 [2024-10-15 09:16:21.607673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.717 [2024-10-15 09:16:21.608355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.717 [2024-10-15 09:16:21.608382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:37.717 [2024-10-15 09:16:21.608505] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:37.717 [2024-10-15 09:16:21.608540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:37.717 pt2 00:16:37.717 09:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.717 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:37.717 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:37.717 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:37.717 09:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.717 09:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.717 [2024-10-15 09:16:21.615413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:37.717 [2024-10-15 09:16:21.615480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.717 [2024-10-15 09:16:21.615518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:37.717 [2024-10-15 09:16:21.615534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.717 [2024-10-15 09:16:21.615994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.717 [2024-10-15 09:16:21.616027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:37.717 [2024-10-15 09:16:21.616109] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:37.717 [2024-10-15 09:16:21.616165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:37.717 pt3 00:16:37.717 09:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.717 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:37.717 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:37.717 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:37.717 09:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.717 09:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.717 [2024-10-15 09:16:21.623381] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:37.717 [2024-10-15 09:16:21.623442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.717 [2024-10-15 09:16:21.623471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:37.717 [2024-10-15 09:16:21.623485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.717 [2024-10-15 09:16:21.623980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.717 [2024-10-15 09:16:21.624012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:37.717 [2024-10-15 09:16:21.624112] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:37.717 [2024-10-15 09:16:21.624162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:37.717 [2024-10-15 09:16:21.624343] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:37.718 [2024-10-15 09:16:21.624359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:37.718 [2024-10-15 09:16:21.624675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:37.718 [2024-10-15 09:16:21.624882] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:37.718 [2024-10-15 09:16:21.624905] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:37.718 [2024-10-15 09:16:21.625065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.718 pt4 00:16:37.718 09:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.718 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:37.718 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:16:37.718 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:37.718 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.718 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.718 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:37.718 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.718 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.718 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.718 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.718 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.718 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.718 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.718 09:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.718 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.718 09:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.976 09:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.976 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.976 "name": "raid_bdev1", 00:16:37.976 "uuid": "b0cd762b-81f4-43fa-8f38-bc6b320cb64f", 00:16:37.976 "strip_size_kb": 64, 00:16:37.976 "state": "online", 00:16:37.976 "raid_level": "raid0", 00:16:37.976 
"superblock": true, 00:16:37.976 "num_base_bdevs": 4, 00:16:37.976 "num_base_bdevs_discovered": 4, 00:16:37.976 "num_base_bdevs_operational": 4, 00:16:37.976 "base_bdevs_list": [ 00:16:37.976 { 00:16:37.976 "name": "pt1", 00:16:37.976 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:37.976 "is_configured": true, 00:16:37.976 "data_offset": 2048, 00:16:37.976 "data_size": 63488 00:16:37.976 }, 00:16:37.976 { 00:16:37.976 "name": "pt2", 00:16:37.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.976 "is_configured": true, 00:16:37.976 "data_offset": 2048, 00:16:37.976 "data_size": 63488 00:16:37.976 }, 00:16:37.976 { 00:16:37.976 "name": "pt3", 00:16:37.976 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:37.976 "is_configured": true, 00:16:37.976 "data_offset": 2048, 00:16:37.976 "data_size": 63488 00:16:37.976 }, 00:16:37.976 { 00:16:37.976 "name": "pt4", 00:16:37.976 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:37.976 "is_configured": true, 00:16:37.976 "data_offset": 2048, 00:16:37.976 "data_size": 63488 00:16:37.976 } 00:16:37.976 ] 00:16:37.976 }' 00:16:37.976 09:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.976 09:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.235 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:38.235 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:38.235 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:38.235 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:38.235 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:38.235 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:38.235 09:16:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:38.235 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:38.235 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.235 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.235 [2024-10-15 09:16:22.144116] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.495 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.495 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:38.495 "name": "raid_bdev1", 00:16:38.495 "aliases": [ 00:16:38.495 "b0cd762b-81f4-43fa-8f38-bc6b320cb64f" 00:16:38.495 ], 00:16:38.495 "product_name": "Raid Volume", 00:16:38.495 "block_size": 512, 00:16:38.495 "num_blocks": 253952, 00:16:38.495 "uuid": "b0cd762b-81f4-43fa-8f38-bc6b320cb64f", 00:16:38.495 "assigned_rate_limits": { 00:16:38.495 "rw_ios_per_sec": 0, 00:16:38.495 "rw_mbytes_per_sec": 0, 00:16:38.495 "r_mbytes_per_sec": 0, 00:16:38.495 "w_mbytes_per_sec": 0 00:16:38.495 }, 00:16:38.495 "claimed": false, 00:16:38.495 "zoned": false, 00:16:38.495 "supported_io_types": { 00:16:38.495 "read": true, 00:16:38.495 "write": true, 00:16:38.495 "unmap": true, 00:16:38.495 "flush": true, 00:16:38.495 "reset": true, 00:16:38.495 "nvme_admin": false, 00:16:38.495 "nvme_io": false, 00:16:38.495 "nvme_io_md": false, 00:16:38.495 "write_zeroes": true, 00:16:38.495 "zcopy": false, 00:16:38.495 "get_zone_info": false, 00:16:38.495 "zone_management": false, 00:16:38.495 "zone_append": false, 00:16:38.495 "compare": false, 00:16:38.495 "compare_and_write": false, 00:16:38.495 "abort": false, 00:16:38.495 "seek_hole": false, 00:16:38.495 "seek_data": false, 00:16:38.495 "copy": false, 00:16:38.495 "nvme_iov_md": false 00:16:38.495 }, 00:16:38.495 
"memory_domains": [ 00:16:38.495 { 00:16:38.495 "dma_device_id": "system", 00:16:38.495 "dma_device_type": 1 00:16:38.495 }, 00:16:38.495 { 00:16:38.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.495 "dma_device_type": 2 00:16:38.495 }, 00:16:38.495 { 00:16:38.495 "dma_device_id": "system", 00:16:38.495 "dma_device_type": 1 00:16:38.495 }, 00:16:38.495 { 00:16:38.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.495 "dma_device_type": 2 00:16:38.495 }, 00:16:38.495 { 00:16:38.495 "dma_device_id": "system", 00:16:38.495 "dma_device_type": 1 00:16:38.495 }, 00:16:38.495 { 00:16:38.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.495 "dma_device_type": 2 00:16:38.495 }, 00:16:38.495 { 00:16:38.495 "dma_device_id": "system", 00:16:38.495 "dma_device_type": 1 00:16:38.495 }, 00:16:38.495 { 00:16:38.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.495 "dma_device_type": 2 00:16:38.495 } 00:16:38.495 ], 00:16:38.495 "driver_specific": { 00:16:38.495 "raid": { 00:16:38.495 "uuid": "b0cd762b-81f4-43fa-8f38-bc6b320cb64f", 00:16:38.495 "strip_size_kb": 64, 00:16:38.495 "state": "online", 00:16:38.495 "raid_level": "raid0", 00:16:38.495 "superblock": true, 00:16:38.495 "num_base_bdevs": 4, 00:16:38.495 "num_base_bdevs_discovered": 4, 00:16:38.495 "num_base_bdevs_operational": 4, 00:16:38.495 "base_bdevs_list": [ 00:16:38.495 { 00:16:38.495 "name": "pt1", 00:16:38.495 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:38.495 "is_configured": true, 00:16:38.495 "data_offset": 2048, 00:16:38.495 "data_size": 63488 00:16:38.495 }, 00:16:38.495 { 00:16:38.495 "name": "pt2", 00:16:38.495 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.495 "is_configured": true, 00:16:38.495 "data_offset": 2048, 00:16:38.495 "data_size": 63488 00:16:38.495 }, 00:16:38.495 { 00:16:38.495 "name": "pt3", 00:16:38.495 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:38.495 "is_configured": true, 00:16:38.495 "data_offset": 2048, 00:16:38.495 "data_size": 63488 
00:16:38.495 }, 00:16:38.495 { 00:16:38.495 "name": "pt4", 00:16:38.495 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:38.495 "is_configured": true, 00:16:38.495 "data_offset": 2048, 00:16:38.495 "data_size": 63488 00:16:38.495 } 00:16:38.495 ] 00:16:38.495 } 00:16:38.495 } 00:16:38.495 }' 00:16:38.495 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:38.495 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:38.495 pt2 00:16:38.495 pt3 00:16:38.495 pt4' 00:16:38.495 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.495 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:38.495 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.495 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:38.495 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.495 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.495 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.495 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.495 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.495 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.495 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.495 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:16:38.495 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.495 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.495 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.495 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.754 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.754 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.754 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.754 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:38.754 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.754 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.754 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.754 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.754 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.754 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.754 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.754 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:38.754 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.754 09:16:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:38.754 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.754 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.754 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.755 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.755 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:38.755 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:38.755 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.755 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.755 [2024-10-15 09:16:22.544092] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.755 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.755 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b0cd762b-81f4-43fa-8f38-bc6b320cb64f '!=' b0cd762b-81f4-43fa-8f38-bc6b320cb64f ']' 00:16:38.755 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:16:38.755 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:38.755 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:38.755 09:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71054 00:16:38.755 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71054 ']' 00:16:38.755 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71054 00:16:38.755 09:16:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:16:38.755 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:38.755 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71054 00:16:38.755 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:38.755 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:38.755 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71054' 00:16:38.755 killing process with pid 71054 00:16:38.755 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 71054 00:16:38.755 [2024-10-15 09:16:22.638956] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:38.755 09:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 71054 00:16:38.755 [2024-10-15 09:16:22.639251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.755 [2024-10-15 09:16:22.639369] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:38.755 [2024-10-15 09:16:22.639387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:39.323 [2024-10-15 09:16:23.031199] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:40.271 09:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:40.271 00:16:40.271 real 0m6.308s 00:16:40.271 user 0m9.356s 00:16:40.271 sys 0m1.010s 00:16:40.271 09:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:40.271 ************************************ 00:16:40.271 END TEST raid_superblock_test 00:16:40.271 ************************************ 00:16:40.271 09:16:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:40.530 09:16:24 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read
00:16:40.530 09:16:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:16:40.530 09:16:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:40.530 09:16:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:16:40.530 ************************************
00:16:40.530 START TEST raid_read_error_test
00:16:40.530 ************************************
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BlR4QdAL9g
00:16:40.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71324
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71324
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 71324 ']'
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:40.530 09:16:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:40.530 [2024-10-15 09:16:24.352575] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization...
00:16:40.530 [2024-10-15 09:16:24.353006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71324 ]
00:16:40.789 [2024-10-15 09:16:24.536004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:40.789 [2024-10-15 09:16:24.708955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:41.048 [2024-10-15 09:16:24.934970] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:41.048 [2024-10-15 09:16:24.935307] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.616 BaseBdev1_malloc
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.616 true
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.616 [2024-10-15 09:16:25.399675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:16:41.616 [2024-10-15 09:16:25.399762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:41.616 [2024-10-15 09:16:25.399794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:16:41.616 [2024-10-15 09:16:25.399814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:41.616 [2024-10-15 09:16:25.402900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:41.616 [2024-10-15 09:16:25.403148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:16:41.616 BaseBdev1
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.616 BaseBdev2_malloc
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.616 true
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.616 [2024-10-15 09:16:25.468435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:16:41.616 [2024-10-15 09:16:25.468521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:41.616 [2024-10-15 09:16:25.468549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:16:41.616 [2024-10-15 09:16:25.468568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:41.616 [2024-10-15 09:16:25.471715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:41.616 [2024-10-15 09:16:25.471764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:16:41.616 BaseBdev2
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.616 BaseBdev3_malloc
00:16:41.616 09:16:25
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.616 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.878 true
00:16:41.878 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.878 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:16:41.878 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.878 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.878 [2024-10-15 09:16:25.550893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:16:41.878 [2024-10-15 09:16:25.551169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:41.878 [2024-10-15 09:16:25.551209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:16:41.878 [2024-10-15 09:16:25.551230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:41.878 [2024-10-15 09:16:25.554303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:41.878 [2024-10-15 09:16:25.554347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:16:41.878 BaseBdev3
00:16:41.878 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.878 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:16:41.878 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:16:41.878 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.878 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.878 BaseBdev4_malloc
00:16:41.878 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.878 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:16:41.878 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.878 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.878 true
00:16:41.878 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.878 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:16:41.878 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.878 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.879 [2024-10-15 09:16:25.617760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:16:41.879 [2024-10-15 09:16:25.617855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:41.879 [2024-10-15 09:16:25.617905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:16:41.879 [2024-10-15 09:16:25.617931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:41.879 [2024-10-15 09:16:25.621078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:41.879 [2024-10-15 09:16:25.621161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:16:41.879 BaseBdev4
00:16:41.879 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.879 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:16:41.879 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.879 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.879 [2024-10-15 09:16:25.630000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:41.879 [2024-10-15 09:16:25.632819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:41.879 [2024-10-15 09:16:25.632937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:41.879 [2024-10-15 09:16:25.633042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:16:41.879 [2024-10-15 09:16:25.633358] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580
00:16:41.879 [2024-10-15 09:16:25.633390] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:16:41.879 [2024-10-15 09:16:25.633746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:16:41.879 [2024-10-15 09:16:25.633989] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580
00:16:41.879 [2024-10-15 09:16:25.634006] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580
00:16:41.879 [2024-10-15 09:16:25.634257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:41.879 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.879 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:16:41.879 09:16:25
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:41.879 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:41.879 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:16:41.879 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:41.879 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:41.879 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:41.879 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:41.880 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:41.880 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:41.880 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:41.880 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.880 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:41.880 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:41.880 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.880 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:41.880 "name": "raid_bdev1",
00:16:41.880 "uuid": "20cbae5a-7cb6-488b-85d4-9ee71a42e6ee",
00:16:41.880 "strip_size_kb": 64,
00:16:41.880 "state": "online",
00:16:41.880 "raid_level": "raid0",
00:16:41.880 "superblock": true,
00:16:41.880 "num_base_bdevs": 4,
00:16:41.880 "num_base_bdevs_discovered": 4,
00:16:41.880 "num_base_bdevs_operational": 4,
00:16:41.880 "base_bdevs_list": [
00:16:41.880 {
00:16:41.880 "name": "BaseBdev1",
00:16:41.880 "uuid": "37de37bb-662b-5c16-81b5-438aa1d509df",
00:16:41.880 "is_configured": true,
00:16:41.880 "data_offset": 2048,
00:16:41.880 "data_size": 63488
00:16:41.880 },
00:16:41.880 {
00:16:41.880 "name": "BaseBdev2",
00:16:41.880 "uuid": "c436df65-0ccc-5d0e-9eb4-681de7ecc356",
00:16:41.880 "is_configured": true,
00:16:41.880 "data_offset": 2048,
00:16:41.880 "data_size": 63488
00:16:41.880 },
00:16:41.880 {
00:16:41.880 "name": "BaseBdev3",
00:16:41.880 "uuid": "5a10fede-303e-58f3-bd95-e600745195c9",
00:16:41.880 "is_configured": true,
00:16:41.880 "data_offset": 2048,
00:16:41.880 "data_size": 63488
00:16:41.880 },
00:16:41.880 {
00:16:41.880 "name": "BaseBdev4",
00:16:41.880 "uuid": "e1631fb2-33ef-55eb-b985-fa097f6a3eff",
00:16:41.880 "is_configured": true,
00:16:41.880 "data_offset": 2048,
00:16:41.880 "data_size": 63488
00:16:41.880 }
00:16:41.880 ]
00:16:41.880 }'
00:16:41.880 09:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:41.880 09:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:42.448 09:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:16:42.448 09:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests [2024-10-15 09:16:26.336161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:16:43.384 09:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:16:43.384 09:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:43.384 09:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:43.384 09:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:43.384 09:16:27
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:16:43.384 09:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:16:43.384 09:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:16:43.384 09:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:16:43.384 09:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:43.384 09:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:43.385 09:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:16:43.385 09:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:43.385 09:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:43.385 09:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:43.385 09:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:43.385 09:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:43.385 09:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:43.385 09:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:43.385 09:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:43.385 09:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:43.385 09:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:43.385 09:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:43.385 09:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:43.385 "name": "raid_bdev1",
00:16:43.385 "uuid": "20cbae5a-7cb6-488b-85d4-9ee71a42e6ee",
00:16:43.385 "strip_size_kb": 64,
00:16:43.385 "state": "online",
00:16:43.385 "raid_level": "raid0",
00:16:43.385 "superblock": true,
00:16:43.385 "num_base_bdevs": 4,
00:16:43.385 "num_base_bdevs_discovered": 4,
00:16:43.385 "num_base_bdevs_operational": 4,
00:16:43.385 "base_bdevs_list": [
00:16:43.385 {
00:16:43.385 "name": "BaseBdev1",
00:16:43.385 "uuid": "37de37bb-662b-5c16-81b5-438aa1d509df",
00:16:43.385 "is_configured": true,
00:16:43.385 "data_offset": 2048,
00:16:43.385 "data_size": 63488
00:16:43.385 },
00:16:43.385 {
00:16:43.385 "name": "BaseBdev2",
00:16:43.385 "uuid": "c436df65-0ccc-5d0e-9eb4-681de7ecc356",
00:16:43.385 "is_configured": true,
00:16:43.385 "data_offset": 2048,
00:16:43.385 "data_size": 63488
00:16:43.385 },
00:16:43.385 {
00:16:43.385 "name": "BaseBdev3",
00:16:43.385 "uuid": "5a10fede-303e-58f3-bd95-e600745195c9",
00:16:43.385 "is_configured": true,
00:16:43.385 "data_offset": 2048,
00:16:43.385 "data_size": 63488
00:16:43.385 },
00:16:43.385 {
00:16:43.385 "name": "BaseBdev4",
00:16:43.385 "uuid": "e1631fb2-33ef-55eb-b985-fa097f6a3eff",
00:16:43.385 "is_configured": true,
00:16:43.385 "data_offset": 2048,
00:16:43.385 "data_size": 63488
00:16:43.385 }
00:16:43.385 ]
00:16:43.385 }'
00:16:43.385 09:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:43.385 09:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:43.952 09:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:43.952 09:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:43.952 09:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:43.952 [2024-10-15 09:16:27.726706]
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:43.952 [2024-10-15 09:16:27.726797] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:43.952 [2024-10-15 09:16:27.730306] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:43.952 [2024-10-15 09:16:27.730394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:43.952 [2024-10-15 09:16:27.730507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:43.952 [2024-10-15 09:16:27.730530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline
00:16:43.953 {
00:16:43.953 "results": [
00:16:43.953 {
00:16:43.953 "job": "raid_bdev1",
00:16:43.953 "core_mask": "0x1",
00:16:43.953 "workload": "randrw",
00:16:43.953 "percentage": 50,
00:16:43.953 "status": "finished",
00:16:43.953 "queue_depth": 1,
00:16:43.953 "io_size": 131072,
00:16:43.953 "runtime": 1.38767,
00:16:43.953 "iops": 9857.530969178551,
00:16:43.953 "mibps": 1232.191371147319,
00:16:43.953 "io_failed": 1,
00:16:43.953 "io_timeout": 0,
00:16:43.953 "avg_latency_us": 143.22446358320042,
00:16:43.953 "min_latency_us": 38.167272727272724,
00:16:43.953 "max_latency_us": 1921.3963636363637
00:16:43.953 }
00:16:43.953 ],
00:16:43.953 "core_count": 1
00:16:43.953 }
00:16:43.953 09:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:43.953 09:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71324
00:16:43.953 09:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 71324 ']'
00:16:43.953 09:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 71324
00:16:43.953 09:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:16:43.953 09:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:43.953 09:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71324
00:16:43.953 09:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:16:43.953 killing process with pid 71324
00:16:43.953 09:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:16:43.953 09:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71324'
00:16:43.953 09:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 71324
00:16:43.953 09:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 71324
00:16:43.953 [2024-10-15 09:16:27.769707] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:44.213 [2024-10-15 09:16:28.063402] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:45.592 09:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:16:45.592 09:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BlR4QdAL9g
00:16:45.592 09:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:16:45.592 09:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72
00:16:45.592 09:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:16:45.592 ************************************
00:16:45.592 09:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:16:45.592 09:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:16:45.592 09:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]]
00:16:45.592
00:16:45.592 real 0m5.037s
00:16:45.592 user 0m6.166s
00:16:45.592 sys 0m0.693s
00:16:45.592 09:16:29
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:45.592 09:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:45.592 END TEST raid_read_error_test
00:16:45.592 ************************************
00:16:45.592 09:16:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write
00:16:45.592 09:16:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:16:45.592 09:16:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:45.592 09:16:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:16:45.592 ************************************
00:16:45.592 START TEST raid_write_error_test
00:16:45.592 ************************************
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:16:45.592 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:16:45.593 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:16:45.593 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:16:45.593 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:16:45.593 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VOMFuINKMf
00:16:45.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:45.593 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71475
00:16:45.593 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71475
00:16:45.593 09:16:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 71475 ']'
00:16:45.593 09:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:16:45.593 09:16:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:45.593 09:16:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:45.593 09:16:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:45.593 09:16:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:45.593 09:16:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:45.593 [2024-10-15 09:16:29.446402] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization...
00:16:45.593 [2024-10-15 09:16:29.446604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71475 ]
00:16:45.851 [2024-10-15 09:16:29.623857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:46.109 [2024-10-15 09:16:29.778995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:46.109 [2024-10-15 09:16:30.007031] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:46.109 [2024-10-15 09:16:30.007427] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:46.675 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:46.675 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:16:46.675 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:16:46.675 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:16:46.675 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.675 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:46.675 BaseBdev1_malloc
00:16:46.675 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.675 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:16:46.675 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.675 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:46.675 true
00:16:46.675 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.676 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:16:46.676 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.676 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:46.676 [2024-10-15 09:16:30.553945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:16:46.676 [2024-10-15 09:16:30.554022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:46.676 [2024-10-15 09:16:30.554063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:16:46.676 [2024-10-15 09:16:30.554083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:46.676 [2024-10-15 09:16:30.557725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:46.676 [2024-10-15 09:16:30.557792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:16:46.676 BaseBdev1
00:16:46.676 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.676 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:16:46.676 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:16:46.676 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.676 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:46.934 BaseBdev2_malloc
00:16:46.934 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.934 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:16:46.934 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.934 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:46.934 true
00:16:46.934 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.934 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:16:46.934 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.934 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:46.934 [2024-10-15 09:16:30.625856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:16:46.934 [2024-10-15 09:16:30.625959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:46.934 [2024-10-15 09:16:30.625988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:16:46.934 [2024-10-15 09:16:30.626007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:46.934 [2024-10-15 09:16:30.629231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:46.934 [2024-10-15 09:16:30.629336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:16:46.935 BaseBdev2
00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:16:46.935 BaseBdev3_malloc 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.935 true 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.935 [2024-10-15 09:16:30.708636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:46.935 [2024-10-15 09:16:30.708718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.935 [2024-10-15 09:16:30.708751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:46.935 [2024-10-15 09:16:30.708771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.935 [2024-10-15 09:16:30.712305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.935 [2024-10-15 09:16:30.712357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:46.935 BaseBdev3 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.935 BaseBdev4_malloc 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.935 true 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.935 [2024-10-15 09:16:30.770565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:46.935 [2024-10-15 09:16:30.770833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.935 [2024-10-15 09:16:30.770872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:46.935 [2024-10-15 09:16:30.770893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.935 [2024-10-15 09:16:30.774291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.935 [2024-10-15 09:16:30.774478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:46.935 BaseBdev4 
00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.935 [2024-10-15 09:16:30.778843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:46.935 [2024-10-15 09:16:30.781733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.935 [2024-10-15 09:16:30.782037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:46.935 [2024-10-15 09:16:30.782277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:46.935 [2024-10-15 09:16:30.782747] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:46.935 [2024-10-15 09:16:30.782974] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:46.935 [2024-10-15 09:16:30.783384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:46.935 [2024-10-15 09:16:30.783763] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:46.935 [2024-10-15 09:16:30.783888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:46.935 [2024-10-15 09:16:30.784298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.935 "name": "raid_bdev1", 00:16:46.935 "uuid": "dea098ee-2791-4e78-a6a0-ce1e8df863c6", 00:16:46.935 "strip_size_kb": 64, 00:16:46.935 "state": "online", 00:16:46.935 "raid_level": "raid0", 00:16:46.935 "superblock": true, 00:16:46.935 "num_base_bdevs": 4, 00:16:46.935 "num_base_bdevs_discovered": 4, 00:16:46.935 
"num_base_bdevs_operational": 4, 00:16:46.935 "base_bdevs_list": [ 00:16:46.935 { 00:16:46.935 "name": "BaseBdev1", 00:16:46.935 "uuid": "59dc58b4-de56-5a34-bdcc-7358ca04226c", 00:16:46.935 "is_configured": true, 00:16:46.935 "data_offset": 2048, 00:16:46.935 "data_size": 63488 00:16:46.935 }, 00:16:46.935 { 00:16:46.935 "name": "BaseBdev2", 00:16:46.935 "uuid": "c9ed8951-978f-52b5-9237-1c6eed2554ba", 00:16:46.935 "is_configured": true, 00:16:46.935 "data_offset": 2048, 00:16:46.935 "data_size": 63488 00:16:46.935 }, 00:16:46.935 { 00:16:46.935 "name": "BaseBdev3", 00:16:46.935 "uuid": "0aafc836-34ed-59f4-97ac-0a7a43d39ed8", 00:16:46.935 "is_configured": true, 00:16:46.935 "data_offset": 2048, 00:16:46.935 "data_size": 63488 00:16:46.935 }, 00:16:46.935 { 00:16:46.935 "name": "BaseBdev4", 00:16:46.935 "uuid": "79f8dd4b-4f21-519b-ac89-fbfdc498837c", 00:16:46.935 "is_configured": true, 00:16:46.935 "data_offset": 2048, 00:16:46.935 "data_size": 63488 00:16:46.935 } 00:16:46.935 ] 00:16:46.935 }' 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.935 09:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.504 09:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:47.504 09:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:47.504 [2024-10-15 09:16:31.416661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.441 "name": "raid_bdev1", 00:16:48.441 "uuid": "dea098ee-2791-4e78-a6a0-ce1e8df863c6", 00:16:48.441 "strip_size_kb": 64, 00:16:48.441 "state": "online", 00:16:48.441 "raid_level": "raid0", 00:16:48.441 "superblock": true, 00:16:48.441 "num_base_bdevs": 4, 00:16:48.441 "num_base_bdevs_discovered": 4, 00:16:48.441 "num_base_bdevs_operational": 4, 00:16:48.441 "base_bdevs_list": [ 00:16:48.441 { 00:16:48.441 "name": "BaseBdev1", 00:16:48.441 "uuid": "59dc58b4-de56-5a34-bdcc-7358ca04226c", 00:16:48.441 "is_configured": true, 00:16:48.441 "data_offset": 2048, 00:16:48.441 "data_size": 63488 00:16:48.441 }, 00:16:48.441 { 00:16:48.441 "name": "BaseBdev2", 00:16:48.441 "uuid": "c9ed8951-978f-52b5-9237-1c6eed2554ba", 00:16:48.441 "is_configured": true, 00:16:48.441 "data_offset": 2048, 00:16:48.441 "data_size": 63488 00:16:48.441 }, 00:16:48.441 { 00:16:48.441 "name": "BaseBdev3", 00:16:48.441 "uuid": "0aafc836-34ed-59f4-97ac-0a7a43d39ed8", 00:16:48.441 "is_configured": true, 00:16:48.441 "data_offset": 2048, 00:16:48.441 "data_size": 63488 00:16:48.441 }, 00:16:48.441 { 00:16:48.441 "name": "BaseBdev4", 00:16:48.441 "uuid": "79f8dd4b-4f21-519b-ac89-fbfdc498837c", 00:16:48.441 "is_configured": true, 00:16:48.441 "data_offset": 2048, 00:16:48.441 "data_size": 63488 00:16:48.441 } 00:16:48.441 ] 00:16:48.441 }' 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.441 09:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.009 09:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:49.009 09:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.009 09:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:49.009 [2024-10-15 09:16:32.835273] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:49.009 [2024-10-15 09:16:32.835450] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:49.009 [2024-10-15 09:16:32.839022] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.009 [2024-10-15 09:16:32.839245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.009 [2024-10-15 09:16:32.839432] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.009 [2024-10-15 09:16:32.839474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:49.009 { 00:16:49.009 "results": [ 00:16:49.009 { 00:16:49.009 "job": "raid_bdev1", 00:16:49.009 "core_mask": "0x1", 00:16:49.009 "workload": "randrw", 00:16:49.009 "percentage": 50, 00:16:49.009 "status": "finished", 00:16:49.009 "queue_depth": 1, 00:16:49.009 "io_size": 131072, 00:16:49.009 "runtime": 1.416301, 00:16:49.009 "iops": 9545.993401120242, 00:16:49.009 "mibps": 1193.2491751400303, 00:16:49.009 "io_failed": 1, 00:16:49.009 "io_timeout": 0, 00:16:49.009 "avg_latency_us": 147.7378666182571, 00:16:49.009 "min_latency_us": 38.4, 00:16:49.009 "max_latency_us": 2129.92 00:16:49.009 } 00:16:49.009 ], 00:16:49.009 "core_count": 1 00:16:49.009 } 00:16:49.009 09:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.009 09:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71475 00:16:49.009 09:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 71475 ']' 00:16:49.009 09:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 71475 00:16:49.009 09:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:16:49.009 09:16:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:49.009 09:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71475 00:16:49.009 killing process with pid 71475 00:16:49.009 09:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:49.009 09:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:49.009 09:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71475' 00:16:49.009 09:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 71475 00:16:49.009 [2024-10-15 09:16:32.873963] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:49.009 09:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 71475 00:16:49.576 [2024-10-15 09:16:33.201695] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:50.511 09:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VOMFuINKMf 00:16:50.511 09:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:50.511 09:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:50.511 09:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:16:50.511 09:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:16:50.511 09:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:50.511 09:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:50.511 09:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:16:50.511 00:16:50.511 real 0m5.124s 00:16:50.511 user 0m6.225s 00:16:50.511 sys 0m0.703s 00:16:50.511 09:16:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:50.511 09:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.511 ************************************ 00:16:50.511 END TEST raid_write_error_test 00:16:50.511 ************************************ 00:16:50.770 09:16:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:16:50.770 09:16:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:16:50.770 09:16:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:50.770 09:16:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:50.770 09:16:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:50.770 ************************************ 00:16:50.770 START TEST raid_state_function_test 00:16:50.770 ************************************ 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71619 00:16:50.770 Process raid pid: 71619 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71619' 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71619 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 71619 ']' 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:50.770 09:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.770 [2024-10-15 09:16:34.617398] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:16:50.770 [2024-10-15 09:16:34.617609] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.029 [2024-10-15 09:16:34.798763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.319 [2024-10-15 09:16:34.974294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.319 [2024-10-15 09:16:35.209609] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.319 [2024-10-15 09:16:35.209667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.887 [2024-10-15 09:16:35.608330] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:51.887 [2024-10-15 09:16:35.608429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:51.887 [2024-10-15 09:16:35.608463] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:51.887 [2024-10-15 09:16:35.608479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:51.887 [2024-10-15 09:16:35.608490] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:51.887 [2024-10-15 09:16:35.608519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:51.887 [2024-10-15 09:16:35.608529] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:51.887 [2024-10-15 09:16:35.608543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.887 "name": "Existed_Raid", 00:16:51.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.887 "strip_size_kb": 64, 00:16:51.887 "state": "configuring", 00:16:51.887 "raid_level": "concat", 00:16:51.887 "superblock": false, 00:16:51.887 "num_base_bdevs": 4, 00:16:51.887 "num_base_bdevs_discovered": 0, 00:16:51.887 "num_base_bdevs_operational": 4, 00:16:51.887 "base_bdevs_list": [ 00:16:51.887 { 00:16:51.887 "name": "BaseBdev1", 00:16:51.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.887 "is_configured": false, 00:16:51.887 "data_offset": 0, 00:16:51.887 "data_size": 0 00:16:51.887 }, 00:16:51.887 { 00:16:51.887 "name": "BaseBdev2", 00:16:51.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.887 "is_configured": false, 00:16:51.887 "data_offset": 0, 00:16:51.887 "data_size": 0 00:16:51.887 }, 00:16:51.887 { 00:16:51.887 "name": "BaseBdev3", 00:16:51.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.887 "is_configured": false, 00:16:51.887 "data_offset": 0, 00:16:51.887 "data_size": 0 00:16:51.887 }, 00:16:51.887 { 00:16:51.887 "name": "BaseBdev4", 00:16:51.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.887 "is_configured": false, 00:16:51.887 "data_offset": 0, 00:16:51.887 "data_size": 0 00:16:51.887 } 00:16:51.887 ] 00:16:51.887 }' 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.887 09:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.454 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:16:52.454 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.454 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.454 [2024-10-15 09:16:36.128482] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:52.454 [2024-10-15 09:16:36.128584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:52.454 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.454 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:52.454 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.455 [2024-10-15 09:16:36.136502] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:52.455 [2024-10-15 09:16:36.136568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:52.455 [2024-10-15 09:16:36.136599] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.455 [2024-10-15 09:16:36.136615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.455 [2024-10-15 09:16:36.136625] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:52.455 [2024-10-15 09:16:36.136656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:52.455 [2024-10-15 09:16:36.136665] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:52.455 [2024-10-15 09:16:36.136680] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.455 [2024-10-15 09:16:36.184490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:52.455 BaseBdev1 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.455 [ 00:16:52.455 { 00:16:52.455 "name": "BaseBdev1", 00:16:52.455 "aliases": [ 00:16:52.455 "d86544d9-f30e-42fe-8380-0ef3bb50dded" 00:16:52.455 ], 00:16:52.455 "product_name": "Malloc disk", 00:16:52.455 "block_size": 512, 00:16:52.455 "num_blocks": 65536, 00:16:52.455 "uuid": "d86544d9-f30e-42fe-8380-0ef3bb50dded", 00:16:52.455 "assigned_rate_limits": { 00:16:52.455 "rw_ios_per_sec": 0, 00:16:52.455 "rw_mbytes_per_sec": 0, 00:16:52.455 "r_mbytes_per_sec": 0, 00:16:52.455 "w_mbytes_per_sec": 0 00:16:52.455 }, 00:16:52.455 "claimed": true, 00:16:52.455 "claim_type": "exclusive_write", 00:16:52.455 "zoned": false, 00:16:52.455 "supported_io_types": { 00:16:52.455 "read": true, 00:16:52.455 "write": true, 00:16:52.455 "unmap": true, 00:16:52.455 "flush": true, 00:16:52.455 "reset": true, 00:16:52.455 "nvme_admin": false, 00:16:52.455 "nvme_io": false, 00:16:52.455 "nvme_io_md": false, 00:16:52.455 "write_zeroes": true, 00:16:52.455 "zcopy": true, 00:16:52.455 "get_zone_info": false, 00:16:52.455 "zone_management": false, 00:16:52.455 "zone_append": false, 00:16:52.455 "compare": false, 00:16:52.455 "compare_and_write": false, 00:16:52.455 "abort": true, 00:16:52.455 "seek_hole": false, 00:16:52.455 "seek_data": false, 00:16:52.455 "copy": true, 00:16:52.455 "nvme_iov_md": false 00:16:52.455 }, 00:16:52.455 "memory_domains": [ 00:16:52.455 { 00:16:52.455 "dma_device_id": "system", 00:16:52.455 "dma_device_type": 1 00:16:52.455 }, 00:16:52.455 { 00:16:52.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.455 "dma_device_type": 2 00:16:52.455 } 00:16:52.455 ], 00:16:52.455 "driver_specific": {} 00:16:52.455 } 00:16:52.455 ] 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.455 "name": "Existed_Raid", 
00:16:52.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.455 "strip_size_kb": 64, 00:16:52.455 "state": "configuring", 00:16:52.455 "raid_level": "concat", 00:16:52.455 "superblock": false, 00:16:52.455 "num_base_bdevs": 4, 00:16:52.455 "num_base_bdevs_discovered": 1, 00:16:52.455 "num_base_bdevs_operational": 4, 00:16:52.455 "base_bdevs_list": [ 00:16:52.455 { 00:16:52.455 "name": "BaseBdev1", 00:16:52.455 "uuid": "d86544d9-f30e-42fe-8380-0ef3bb50dded", 00:16:52.455 "is_configured": true, 00:16:52.455 "data_offset": 0, 00:16:52.455 "data_size": 65536 00:16:52.455 }, 00:16:52.455 { 00:16:52.455 "name": "BaseBdev2", 00:16:52.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.455 "is_configured": false, 00:16:52.455 "data_offset": 0, 00:16:52.455 "data_size": 0 00:16:52.455 }, 00:16:52.455 { 00:16:52.455 "name": "BaseBdev3", 00:16:52.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.455 "is_configured": false, 00:16:52.455 "data_offset": 0, 00:16:52.455 "data_size": 0 00:16:52.455 }, 00:16:52.455 { 00:16:52.455 "name": "BaseBdev4", 00:16:52.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.455 "is_configured": false, 00:16:52.455 "data_offset": 0, 00:16:52.455 "data_size": 0 00:16:52.455 } 00:16:52.455 ] 00:16:52.455 }' 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.455 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.023 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:53.023 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.024 [2024-10-15 09:16:36.728732] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:53.024 [2024-10-15 09:16:36.728821] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.024 [2024-10-15 09:16:36.736769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.024 [2024-10-15 09:16:36.739652] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:53.024 [2024-10-15 09:16:36.739720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:53.024 [2024-10-15 09:16:36.739752] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:53.024 [2024-10-15 09:16:36.739769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:53.024 [2024-10-15 09:16:36.739780] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:53.024 [2024-10-15 09:16:36.739793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.024 "name": "Existed_Raid", 00:16:53.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.024 "strip_size_kb": 64, 00:16:53.024 "state": "configuring", 00:16:53.024 "raid_level": "concat", 00:16:53.024 "superblock": false, 00:16:53.024 "num_base_bdevs": 4, 00:16:53.024 
"num_base_bdevs_discovered": 1, 00:16:53.024 "num_base_bdevs_operational": 4, 00:16:53.024 "base_bdevs_list": [ 00:16:53.024 { 00:16:53.024 "name": "BaseBdev1", 00:16:53.024 "uuid": "d86544d9-f30e-42fe-8380-0ef3bb50dded", 00:16:53.024 "is_configured": true, 00:16:53.024 "data_offset": 0, 00:16:53.024 "data_size": 65536 00:16:53.024 }, 00:16:53.024 { 00:16:53.024 "name": "BaseBdev2", 00:16:53.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.024 "is_configured": false, 00:16:53.024 "data_offset": 0, 00:16:53.024 "data_size": 0 00:16:53.024 }, 00:16:53.024 { 00:16:53.024 "name": "BaseBdev3", 00:16:53.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.024 "is_configured": false, 00:16:53.024 "data_offset": 0, 00:16:53.024 "data_size": 0 00:16:53.024 }, 00:16:53.024 { 00:16:53.024 "name": "BaseBdev4", 00:16:53.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.024 "is_configured": false, 00:16:53.024 "data_offset": 0, 00:16:53.024 "data_size": 0 00:16:53.024 } 00:16:53.024 ] 00:16:53.024 }' 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.024 09:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.591 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:53.591 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.592 [2024-10-15 09:16:37.300235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:53.592 BaseBdev2 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:53.592 09:16:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.592 [ 00:16:53.592 { 00:16:53.592 "name": "BaseBdev2", 00:16:53.592 "aliases": [ 00:16:53.592 "944f9c2b-5ff7-4639-ab9f-6eaf089385e9" 00:16:53.592 ], 00:16:53.592 "product_name": "Malloc disk", 00:16:53.592 "block_size": 512, 00:16:53.592 "num_blocks": 65536, 00:16:53.592 "uuid": "944f9c2b-5ff7-4639-ab9f-6eaf089385e9", 00:16:53.592 "assigned_rate_limits": { 00:16:53.592 "rw_ios_per_sec": 0, 00:16:53.592 "rw_mbytes_per_sec": 0, 00:16:53.592 "r_mbytes_per_sec": 0, 00:16:53.592 "w_mbytes_per_sec": 0 00:16:53.592 }, 00:16:53.592 "claimed": true, 00:16:53.592 "claim_type": "exclusive_write", 00:16:53.592 "zoned": false, 00:16:53.592 "supported_io_types": { 
00:16:53.592 "read": true, 00:16:53.592 "write": true, 00:16:53.592 "unmap": true, 00:16:53.592 "flush": true, 00:16:53.592 "reset": true, 00:16:53.592 "nvme_admin": false, 00:16:53.592 "nvme_io": false, 00:16:53.592 "nvme_io_md": false, 00:16:53.592 "write_zeroes": true, 00:16:53.592 "zcopy": true, 00:16:53.592 "get_zone_info": false, 00:16:53.592 "zone_management": false, 00:16:53.592 "zone_append": false, 00:16:53.592 "compare": false, 00:16:53.592 "compare_and_write": false, 00:16:53.592 "abort": true, 00:16:53.592 "seek_hole": false, 00:16:53.592 "seek_data": false, 00:16:53.592 "copy": true, 00:16:53.592 "nvme_iov_md": false 00:16:53.592 }, 00:16:53.592 "memory_domains": [ 00:16:53.592 { 00:16:53.592 "dma_device_id": "system", 00:16:53.592 "dma_device_type": 1 00:16:53.592 }, 00:16:53.592 { 00:16:53.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.592 "dma_device_type": 2 00:16:53.592 } 00:16:53.592 ], 00:16:53.592 "driver_specific": {} 00:16:53.592 } 00:16:53.592 ] 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.592 "name": "Existed_Raid", 00:16:53.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.592 "strip_size_kb": 64, 00:16:53.592 "state": "configuring", 00:16:53.592 "raid_level": "concat", 00:16:53.592 "superblock": false, 00:16:53.592 "num_base_bdevs": 4, 00:16:53.592 "num_base_bdevs_discovered": 2, 00:16:53.592 "num_base_bdevs_operational": 4, 00:16:53.592 "base_bdevs_list": [ 00:16:53.592 { 00:16:53.592 "name": "BaseBdev1", 00:16:53.592 "uuid": "d86544d9-f30e-42fe-8380-0ef3bb50dded", 00:16:53.592 "is_configured": true, 00:16:53.592 "data_offset": 0, 00:16:53.592 "data_size": 65536 00:16:53.592 }, 00:16:53.592 { 00:16:53.592 "name": "BaseBdev2", 00:16:53.592 "uuid": "944f9c2b-5ff7-4639-ab9f-6eaf089385e9", 00:16:53.592 
"is_configured": true, 00:16:53.592 "data_offset": 0, 00:16:53.592 "data_size": 65536 00:16:53.592 }, 00:16:53.592 { 00:16:53.592 "name": "BaseBdev3", 00:16:53.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.592 "is_configured": false, 00:16:53.592 "data_offset": 0, 00:16:53.592 "data_size": 0 00:16:53.592 }, 00:16:53.592 { 00:16:53.592 "name": "BaseBdev4", 00:16:53.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.592 "is_configured": false, 00:16:53.592 "data_offset": 0, 00:16:53.592 "data_size": 0 00:16:53.592 } 00:16:53.592 ] 00:16:53.592 }' 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.592 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.232 [2024-10-15 09:16:37.915303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:54.232 BaseBdev3 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.232 [ 00:16:54.232 { 00:16:54.232 "name": "BaseBdev3", 00:16:54.232 "aliases": [ 00:16:54.232 "44ab6a8e-ad2f-4706-b9bd-ebb21b341cad" 00:16:54.232 ], 00:16:54.232 "product_name": "Malloc disk", 00:16:54.232 "block_size": 512, 00:16:54.232 "num_blocks": 65536, 00:16:54.232 "uuid": "44ab6a8e-ad2f-4706-b9bd-ebb21b341cad", 00:16:54.232 "assigned_rate_limits": { 00:16:54.232 "rw_ios_per_sec": 0, 00:16:54.232 "rw_mbytes_per_sec": 0, 00:16:54.232 "r_mbytes_per_sec": 0, 00:16:54.232 "w_mbytes_per_sec": 0 00:16:54.232 }, 00:16:54.232 "claimed": true, 00:16:54.232 "claim_type": "exclusive_write", 00:16:54.232 "zoned": false, 00:16:54.232 "supported_io_types": { 00:16:54.232 "read": true, 00:16:54.232 "write": true, 00:16:54.232 "unmap": true, 00:16:54.232 "flush": true, 00:16:54.232 "reset": true, 00:16:54.232 "nvme_admin": false, 00:16:54.232 "nvme_io": false, 00:16:54.232 "nvme_io_md": false, 00:16:54.232 "write_zeroes": true, 00:16:54.232 "zcopy": true, 00:16:54.232 "get_zone_info": false, 00:16:54.232 "zone_management": false, 00:16:54.232 "zone_append": false, 00:16:54.232 "compare": false, 00:16:54.232 "compare_and_write": false, 
00:16:54.232 "abort": true, 00:16:54.232 "seek_hole": false, 00:16:54.232 "seek_data": false, 00:16:54.232 "copy": true, 00:16:54.232 "nvme_iov_md": false 00:16:54.232 }, 00:16:54.232 "memory_domains": [ 00:16:54.232 { 00:16:54.232 "dma_device_id": "system", 00:16:54.232 "dma_device_type": 1 00:16:54.232 }, 00:16:54.232 { 00:16:54.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.232 "dma_device_type": 2 00:16:54.232 } 00:16:54.232 ], 00:16:54.232 "driver_specific": {} 00:16:54.232 } 00:16:54.232 ] 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.232 09:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.232 09:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.232 "name": "Existed_Raid", 00:16:54.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.232 "strip_size_kb": 64, 00:16:54.232 "state": "configuring", 00:16:54.232 "raid_level": "concat", 00:16:54.232 "superblock": false, 00:16:54.232 "num_base_bdevs": 4, 00:16:54.232 "num_base_bdevs_discovered": 3, 00:16:54.232 "num_base_bdevs_operational": 4, 00:16:54.232 "base_bdevs_list": [ 00:16:54.232 { 00:16:54.232 "name": "BaseBdev1", 00:16:54.232 "uuid": "d86544d9-f30e-42fe-8380-0ef3bb50dded", 00:16:54.232 "is_configured": true, 00:16:54.232 "data_offset": 0, 00:16:54.232 "data_size": 65536 00:16:54.232 }, 00:16:54.232 { 00:16:54.232 "name": "BaseBdev2", 00:16:54.232 "uuid": "944f9c2b-5ff7-4639-ab9f-6eaf089385e9", 00:16:54.232 "is_configured": true, 00:16:54.232 "data_offset": 0, 00:16:54.232 "data_size": 65536 00:16:54.232 }, 00:16:54.232 { 00:16:54.232 "name": "BaseBdev3", 00:16:54.232 "uuid": "44ab6a8e-ad2f-4706-b9bd-ebb21b341cad", 00:16:54.232 "is_configured": true, 00:16:54.232 "data_offset": 0, 00:16:54.232 "data_size": 65536 00:16:54.232 }, 00:16:54.232 { 00:16:54.232 "name": "BaseBdev4", 00:16:54.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.232 "is_configured": false, 
00:16:54.232 "data_offset": 0, 00:16:54.232 "data_size": 0 00:16:54.232 } 00:16:54.232 ] 00:16:54.232 }' 00:16:54.232 09:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.232 09:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.800 [2024-10-15 09:16:38.514528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:54.800 [2024-10-15 09:16:38.514801] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:54.800 [2024-10-15 09:16:38.514825] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:54.800 [2024-10-15 09:16:38.515233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:54.800 [2024-10-15 09:16:38.515503] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:54.800 [2024-10-15 09:16:38.515525] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:54.800 [2024-10-15 09:16:38.515901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.800 BaseBdev4 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.800 [ 00:16:54.800 { 00:16:54.800 "name": "BaseBdev4", 00:16:54.800 "aliases": [ 00:16:54.800 "cb6ee5de-a43a-436e-b857-1551f7db857f" 00:16:54.800 ], 00:16:54.800 "product_name": "Malloc disk", 00:16:54.800 "block_size": 512, 00:16:54.800 "num_blocks": 65536, 00:16:54.800 "uuid": "cb6ee5de-a43a-436e-b857-1551f7db857f", 00:16:54.800 "assigned_rate_limits": { 00:16:54.800 "rw_ios_per_sec": 0, 00:16:54.800 "rw_mbytes_per_sec": 0, 00:16:54.800 "r_mbytes_per_sec": 0, 00:16:54.800 "w_mbytes_per_sec": 0 00:16:54.800 }, 00:16:54.800 "claimed": true, 00:16:54.800 "claim_type": "exclusive_write", 00:16:54.800 "zoned": false, 00:16:54.800 "supported_io_types": { 00:16:54.800 "read": true, 00:16:54.800 "write": true, 00:16:54.800 "unmap": true, 00:16:54.800 "flush": true, 00:16:54.800 "reset": true, 00:16:54.800 
"nvme_admin": false, 00:16:54.800 "nvme_io": false, 00:16:54.800 "nvme_io_md": false, 00:16:54.800 "write_zeroes": true, 00:16:54.800 "zcopy": true, 00:16:54.800 "get_zone_info": false, 00:16:54.800 "zone_management": false, 00:16:54.800 "zone_append": false, 00:16:54.800 "compare": false, 00:16:54.800 "compare_and_write": false, 00:16:54.800 "abort": true, 00:16:54.800 "seek_hole": false, 00:16:54.800 "seek_data": false, 00:16:54.800 "copy": true, 00:16:54.800 "nvme_iov_md": false 00:16:54.800 }, 00:16:54.800 "memory_domains": [ 00:16:54.800 { 00:16:54.800 "dma_device_id": "system", 00:16:54.800 "dma_device_type": 1 00:16:54.800 }, 00:16:54.800 { 00:16:54.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.800 "dma_device_type": 2 00:16:54.800 } 00:16:54.800 ], 00:16:54.800 "driver_specific": {} 00:16:54.800 } 00:16:54.800 ] 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.800 
09:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.800 "name": "Existed_Raid", 00:16:54.800 "uuid": "39775e8d-d713-4882-842a-282808a49d4c", 00:16:54.800 "strip_size_kb": 64, 00:16:54.800 "state": "online", 00:16:54.800 "raid_level": "concat", 00:16:54.800 "superblock": false, 00:16:54.800 "num_base_bdevs": 4, 00:16:54.800 "num_base_bdevs_discovered": 4, 00:16:54.800 "num_base_bdevs_operational": 4, 00:16:54.800 "base_bdevs_list": [ 00:16:54.800 { 00:16:54.800 "name": "BaseBdev1", 00:16:54.800 "uuid": "d86544d9-f30e-42fe-8380-0ef3bb50dded", 00:16:54.800 "is_configured": true, 00:16:54.800 "data_offset": 0, 00:16:54.800 "data_size": 65536 00:16:54.800 }, 00:16:54.800 { 00:16:54.800 "name": "BaseBdev2", 00:16:54.800 "uuid": "944f9c2b-5ff7-4639-ab9f-6eaf089385e9", 00:16:54.800 "is_configured": true, 00:16:54.800 "data_offset": 0, 00:16:54.800 "data_size": 65536 00:16:54.800 }, 00:16:54.800 { 00:16:54.800 "name": "BaseBdev3", 
00:16:54.800 "uuid": "44ab6a8e-ad2f-4706-b9bd-ebb21b341cad", 00:16:54.800 "is_configured": true, 00:16:54.800 "data_offset": 0, 00:16:54.800 "data_size": 65536 00:16:54.800 }, 00:16:54.800 { 00:16:54.800 "name": "BaseBdev4", 00:16:54.800 "uuid": "cb6ee5de-a43a-436e-b857-1551f7db857f", 00:16:54.800 "is_configured": true, 00:16:54.800 "data_offset": 0, 00:16:54.800 "data_size": 65536 00:16:54.800 } 00:16:54.800 ] 00:16:54.800 }' 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.800 09:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.368 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:55.368 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:55.368 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:55.368 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:55.368 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:55.368 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:55.368 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:55.368 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:55.368 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.368 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.368 [2024-10-15 09:16:39.079279] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:55.368 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.368 
09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:55.368 "name": "Existed_Raid", 00:16:55.368 "aliases": [ 00:16:55.368 "39775e8d-d713-4882-842a-282808a49d4c" 00:16:55.368 ], 00:16:55.368 "product_name": "Raid Volume", 00:16:55.368 "block_size": 512, 00:16:55.368 "num_blocks": 262144, 00:16:55.368 "uuid": "39775e8d-d713-4882-842a-282808a49d4c", 00:16:55.368 "assigned_rate_limits": { 00:16:55.368 "rw_ios_per_sec": 0, 00:16:55.368 "rw_mbytes_per_sec": 0, 00:16:55.368 "r_mbytes_per_sec": 0, 00:16:55.368 "w_mbytes_per_sec": 0 00:16:55.368 }, 00:16:55.368 "claimed": false, 00:16:55.368 "zoned": false, 00:16:55.369 "supported_io_types": { 00:16:55.369 "read": true, 00:16:55.369 "write": true, 00:16:55.369 "unmap": true, 00:16:55.369 "flush": true, 00:16:55.369 "reset": true, 00:16:55.369 "nvme_admin": false, 00:16:55.369 "nvme_io": false, 00:16:55.369 "nvme_io_md": false, 00:16:55.369 "write_zeroes": true, 00:16:55.369 "zcopy": false, 00:16:55.369 "get_zone_info": false, 00:16:55.369 "zone_management": false, 00:16:55.369 "zone_append": false, 00:16:55.369 "compare": false, 00:16:55.369 "compare_and_write": false, 00:16:55.369 "abort": false, 00:16:55.369 "seek_hole": false, 00:16:55.369 "seek_data": false, 00:16:55.369 "copy": false, 00:16:55.369 "nvme_iov_md": false 00:16:55.369 }, 00:16:55.369 "memory_domains": [ 00:16:55.369 { 00:16:55.369 "dma_device_id": "system", 00:16:55.369 "dma_device_type": 1 00:16:55.369 }, 00:16:55.369 { 00:16:55.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.369 "dma_device_type": 2 00:16:55.369 }, 00:16:55.369 { 00:16:55.369 "dma_device_id": "system", 00:16:55.369 "dma_device_type": 1 00:16:55.369 }, 00:16:55.369 { 00:16:55.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.369 "dma_device_type": 2 00:16:55.369 }, 00:16:55.369 { 00:16:55.369 "dma_device_id": "system", 00:16:55.369 "dma_device_type": 1 00:16:55.369 }, 00:16:55.369 { 00:16:55.369 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:55.369 "dma_device_type": 2 00:16:55.369 }, 00:16:55.369 { 00:16:55.369 "dma_device_id": "system", 00:16:55.369 "dma_device_type": 1 00:16:55.369 }, 00:16:55.369 { 00:16:55.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.369 "dma_device_type": 2 00:16:55.369 } 00:16:55.369 ], 00:16:55.369 "driver_specific": { 00:16:55.369 "raid": { 00:16:55.369 "uuid": "39775e8d-d713-4882-842a-282808a49d4c", 00:16:55.369 "strip_size_kb": 64, 00:16:55.369 "state": "online", 00:16:55.369 "raid_level": "concat", 00:16:55.369 "superblock": false, 00:16:55.369 "num_base_bdevs": 4, 00:16:55.369 "num_base_bdevs_discovered": 4, 00:16:55.369 "num_base_bdevs_operational": 4, 00:16:55.369 "base_bdevs_list": [ 00:16:55.369 { 00:16:55.369 "name": "BaseBdev1", 00:16:55.369 "uuid": "d86544d9-f30e-42fe-8380-0ef3bb50dded", 00:16:55.369 "is_configured": true, 00:16:55.369 "data_offset": 0, 00:16:55.369 "data_size": 65536 00:16:55.369 }, 00:16:55.369 { 00:16:55.369 "name": "BaseBdev2", 00:16:55.369 "uuid": "944f9c2b-5ff7-4639-ab9f-6eaf089385e9", 00:16:55.369 "is_configured": true, 00:16:55.369 "data_offset": 0, 00:16:55.369 "data_size": 65536 00:16:55.369 }, 00:16:55.369 { 00:16:55.369 "name": "BaseBdev3", 00:16:55.369 "uuid": "44ab6a8e-ad2f-4706-b9bd-ebb21b341cad", 00:16:55.369 "is_configured": true, 00:16:55.369 "data_offset": 0, 00:16:55.369 "data_size": 65536 00:16:55.369 }, 00:16:55.369 { 00:16:55.369 "name": "BaseBdev4", 00:16:55.369 "uuid": "cb6ee5de-a43a-436e-b857-1551f7db857f", 00:16:55.369 "is_configured": true, 00:16:55.369 "data_offset": 0, 00:16:55.369 "data_size": 65536 00:16:55.369 } 00:16:55.369 ] 00:16:55.369 } 00:16:55.369 } 00:16:55.369 }' 00:16:55.369 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:55.369 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:55.369 BaseBdev2 
00:16:55.369 BaseBdev3 00:16:55.369 BaseBdev4' 00:16:55.369 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:55.369 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:55.369 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:55.369 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:55.369 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:55.369 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.369 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.369 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.369 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:55.369 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:55.369 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:55.369 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:55.369 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:55.369 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.369 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.628 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.628 09:16:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:55.628 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:55.628 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:55.628 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:55.628 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:55.628 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.628 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.628 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.628 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:55.628 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:55.628 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:55.628 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:55.628 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.628 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.628 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:55.628 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.628 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:55.628 09:16:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:55.628 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:55.628 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.628 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.628 [2024-10-15 09:16:39.474978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:55.628 [2024-10-15 09:16:39.475215] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:55.628 [2024-10-15 09:16:39.475314] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.887 "name": "Existed_Raid", 00:16:55.887 "uuid": "39775e8d-d713-4882-842a-282808a49d4c", 00:16:55.887 "strip_size_kb": 64, 00:16:55.887 "state": "offline", 00:16:55.887 "raid_level": "concat", 00:16:55.887 "superblock": false, 00:16:55.887 "num_base_bdevs": 4, 00:16:55.887 "num_base_bdevs_discovered": 3, 00:16:55.887 "num_base_bdevs_operational": 3, 00:16:55.887 "base_bdevs_list": [ 00:16:55.887 { 00:16:55.887 "name": null, 00:16:55.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.887 "is_configured": false, 00:16:55.887 "data_offset": 0, 00:16:55.887 "data_size": 65536 00:16:55.887 }, 00:16:55.887 { 00:16:55.887 "name": "BaseBdev2", 00:16:55.887 "uuid": "944f9c2b-5ff7-4639-ab9f-6eaf089385e9", 00:16:55.887 "is_configured": 
true, 00:16:55.887 "data_offset": 0, 00:16:55.887 "data_size": 65536 00:16:55.887 }, 00:16:55.887 { 00:16:55.887 "name": "BaseBdev3", 00:16:55.887 "uuid": "44ab6a8e-ad2f-4706-b9bd-ebb21b341cad", 00:16:55.887 "is_configured": true, 00:16:55.887 "data_offset": 0, 00:16:55.887 "data_size": 65536 00:16:55.887 }, 00:16:55.887 { 00:16:55.887 "name": "BaseBdev4", 00:16:55.887 "uuid": "cb6ee5de-a43a-436e-b857-1551f7db857f", 00:16:55.887 "is_configured": true, 00:16:55.887 "data_offset": 0, 00:16:55.887 "data_size": 65536 00:16:55.887 } 00:16:55.887 ] 00:16:55.887 }' 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.887 09:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.455 [2024-10-15 09:16:40.167348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.455 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.455 [2024-10-15 09:16:40.350705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:56.714 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.714 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:56.714 09:16:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:56.714 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.714 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.714 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:56.714 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.714 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.714 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:56.714 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:56.714 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:56.714 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.714 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.714 [2024-10-15 09:16:40.506532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:56.714 [2024-10-15 09:16:40.506751] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:56.714 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.714 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:56.714 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:56.714 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.714 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:56.714 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.715 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:56.715 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.974 BaseBdev2 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.974 [ 00:16:56.974 { 00:16:56.974 "name": "BaseBdev2", 00:16:56.974 "aliases": [ 00:16:56.974 "bf776c8a-9f7c-4edd-a19a-22192c73e816" 00:16:56.974 ], 00:16:56.974 "product_name": "Malloc disk", 00:16:56.974 "block_size": 512, 00:16:56.974 "num_blocks": 65536, 00:16:56.974 "uuid": "bf776c8a-9f7c-4edd-a19a-22192c73e816", 00:16:56.974 "assigned_rate_limits": { 00:16:56.974 "rw_ios_per_sec": 0, 00:16:56.974 "rw_mbytes_per_sec": 0, 00:16:56.974 "r_mbytes_per_sec": 0, 00:16:56.974 "w_mbytes_per_sec": 0 00:16:56.974 }, 00:16:56.974 "claimed": false, 00:16:56.974 "zoned": false, 00:16:56.974 "supported_io_types": { 00:16:56.974 "read": true, 00:16:56.974 "write": true, 00:16:56.974 "unmap": true, 00:16:56.974 "flush": true, 00:16:56.974 "reset": true, 00:16:56.974 "nvme_admin": false, 00:16:56.974 "nvme_io": false, 00:16:56.974 "nvme_io_md": false, 00:16:56.974 "write_zeroes": true, 00:16:56.974 "zcopy": true, 00:16:56.974 "get_zone_info": false, 00:16:56.974 "zone_management": false, 00:16:56.974 "zone_append": false, 00:16:56.974 "compare": false, 00:16:56.974 "compare_and_write": false, 00:16:56.974 "abort": true, 00:16:56.974 "seek_hole": false, 00:16:56.974 
"seek_data": false, 00:16:56.974 "copy": true, 00:16:56.974 "nvme_iov_md": false 00:16:56.974 }, 00:16:56.974 "memory_domains": [ 00:16:56.974 { 00:16:56.974 "dma_device_id": "system", 00:16:56.974 "dma_device_type": 1 00:16:56.974 }, 00:16:56.974 { 00:16:56.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.974 "dma_device_type": 2 00:16:56.974 } 00:16:56.974 ], 00:16:56.974 "driver_specific": {} 00:16:56.974 } 00:16:56.974 ] 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.974 BaseBdev3 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.974 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.975 [ 00:16:56.975 { 00:16:56.975 "name": "BaseBdev3", 00:16:56.975 "aliases": [ 00:16:56.975 "9426f031-f277-4e50-95dd-387d2241eed9" 00:16:56.975 ], 00:16:56.975 "product_name": "Malloc disk", 00:16:56.975 "block_size": 512, 00:16:56.975 "num_blocks": 65536, 00:16:56.975 "uuid": "9426f031-f277-4e50-95dd-387d2241eed9", 00:16:56.975 "assigned_rate_limits": { 00:16:56.975 "rw_ios_per_sec": 0, 00:16:56.975 "rw_mbytes_per_sec": 0, 00:16:56.975 "r_mbytes_per_sec": 0, 00:16:56.975 "w_mbytes_per_sec": 0 00:16:56.975 }, 00:16:56.975 "claimed": false, 00:16:56.975 "zoned": false, 00:16:56.975 "supported_io_types": { 00:16:56.975 "read": true, 00:16:56.975 "write": true, 00:16:56.975 "unmap": true, 00:16:56.975 "flush": true, 00:16:56.975 "reset": true, 00:16:56.975 "nvme_admin": false, 00:16:56.975 "nvme_io": false, 00:16:56.975 "nvme_io_md": false, 00:16:56.975 "write_zeroes": true, 00:16:56.975 "zcopy": true, 00:16:56.975 "get_zone_info": false, 00:16:56.975 "zone_management": false, 00:16:56.975 "zone_append": false, 00:16:56.975 "compare": false, 00:16:56.975 "compare_and_write": false, 00:16:56.975 "abort": true, 00:16:56.975 "seek_hole": false, 00:16:56.975 "seek_data": false, 
00:16:56.975 "copy": true, 00:16:56.975 "nvme_iov_md": false 00:16:56.975 }, 00:16:56.975 "memory_domains": [ 00:16:56.975 { 00:16:56.975 "dma_device_id": "system", 00:16:56.975 "dma_device_type": 1 00:16:56.975 }, 00:16:56.975 { 00:16:56.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.975 "dma_device_type": 2 00:16:56.975 } 00:16:56.975 ], 00:16:56.975 "driver_specific": {} 00:16:56.975 } 00:16:56.975 ] 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.975 BaseBdev4 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:56.975 
09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.975 [ 00:16:56.975 { 00:16:56.975 "name": "BaseBdev4", 00:16:56.975 "aliases": [ 00:16:56.975 "0812a14c-397a-484b-9bdc-1fa0c956c8dc" 00:16:56.975 ], 00:16:56.975 "product_name": "Malloc disk", 00:16:56.975 "block_size": 512, 00:16:56.975 "num_blocks": 65536, 00:16:56.975 "uuid": "0812a14c-397a-484b-9bdc-1fa0c956c8dc", 00:16:56.975 "assigned_rate_limits": { 00:16:56.975 "rw_ios_per_sec": 0, 00:16:56.975 "rw_mbytes_per_sec": 0, 00:16:56.975 "r_mbytes_per_sec": 0, 00:16:56.975 "w_mbytes_per_sec": 0 00:16:56.975 }, 00:16:56.975 "claimed": false, 00:16:56.975 "zoned": false, 00:16:56.975 "supported_io_types": { 00:16:56.975 "read": true, 00:16:56.975 "write": true, 00:16:56.975 "unmap": true, 00:16:56.975 "flush": true, 00:16:56.975 "reset": true, 00:16:56.975 "nvme_admin": false, 00:16:56.975 "nvme_io": false, 00:16:56.975 "nvme_io_md": false, 00:16:56.975 "write_zeroes": true, 00:16:56.975 "zcopy": true, 00:16:56.975 "get_zone_info": false, 00:16:56.975 "zone_management": false, 00:16:56.975 "zone_append": false, 00:16:56.975 "compare": false, 00:16:56.975 "compare_and_write": false, 00:16:56.975 "abort": true, 00:16:56.975 "seek_hole": false, 00:16:56.975 "seek_data": false, 00:16:56.975 
"copy": true, 00:16:56.975 "nvme_iov_md": false 00:16:56.975 }, 00:16:56.975 "memory_domains": [ 00:16:56.975 { 00:16:56.975 "dma_device_id": "system", 00:16:56.975 "dma_device_type": 1 00:16:56.975 }, 00:16:56.975 { 00:16:56.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.975 "dma_device_type": 2 00:16:56.975 } 00:16:56.975 ], 00:16:56.975 "driver_specific": {} 00:16:56.975 } 00:16:56.975 ] 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.975 [2024-10-15 09:16:40.892338] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:56.975 [2024-10-15 09:16:40.892549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:56.975 [2024-10-15 09:16:40.892682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:56.975 [2024-10-15 09:16:40.895399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:56.975 [2024-10-15 09:16:40.895482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.975 09:16:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.975 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.235 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.235 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.235 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.235 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.235 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.235 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.235 "name": "Existed_Raid", 00:16:57.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.235 "strip_size_kb": 64, 00:16:57.235 "state": "configuring", 00:16:57.235 
"raid_level": "concat", 00:16:57.235 "superblock": false, 00:16:57.235 "num_base_bdevs": 4, 00:16:57.235 "num_base_bdevs_discovered": 3, 00:16:57.235 "num_base_bdevs_operational": 4, 00:16:57.235 "base_bdevs_list": [ 00:16:57.235 { 00:16:57.235 "name": "BaseBdev1", 00:16:57.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.235 "is_configured": false, 00:16:57.235 "data_offset": 0, 00:16:57.235 "data_size": 0 00:16:57.235 }, 00:16:57.235 { 00:16:57.235 "name": "BaseBdev2", 00:16:57.235 "uuid": "bf776c8a-9f7c-4edd-a19a-22192c73e816", 00:16:57.235 "is_configured": true, 00:16:57.235 "data_offset": 0, 00:16:57.235 "data_size": 65536 00:16:57.235 }, 00:16:57.235 { 00:16:57.235 "name": "BaseBdev3", 00:16:57.235 "uuid": "9426f031-f277-4e50-95dd-387d2241eed9", 00:16:57.235 "is_configured": true, 00:16:57.235 "data_offset": 0, 00:16:57.235 "data_size": 65536 00:16:57.235 }, 00:16:57.235 { 00:16:57.235 "name": "BaseBdev4", 00:16:57.235 "uuid": "0812a14c-397a-484b-9bdc-1fa0c956c8dc", 00:16:57.235 "is_configured": true, 00:16:57.235 "data_offset": 0, 00:16:57.235 "data_size": 65536 00:16:57.235 } 00:16:57.235 ] 00:16:57.235 }' 00:16:57.235 09:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.235 09:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.803 [2024-10-15 09:16:41.432679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.803 "name": "Existed_Raid", 00:16:57.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.803 "strip_size_kb": 64, 00:16:57.803 "state": "configuring", 00:16:57.803 "raid_level": "concat", 00:16:57.803 "superblock": false, 
00:16:57.803 "num_base_bdevs": 4, 00:16:57.803 "num_base_bdevs_discovered": 2, 00:16:57.803 "num_base_bdevs_operational": 4, 00:16:57.803 "base_bdevs_list": [ 00:16:57.803 { 00:16:57.803 "name": "BaseBdev1", 00:16:57.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.803 "is_configured": false, 00:16:57.803 "data_offset": 0, 00:16:57.803 "data_size": 0 00:16:57.803 }, 00:16:57.803 { 00:16:57.803 "name": null, 00:16:57.803 "uuid": "bf776c8a-9f7c-4edd-a19a-22192c73e816", 00:16:57.803 "is_configured": false, 00:16:57.803 "data_offset": 0, 00:16:57.803 "data_size": 65536 00:16:57.803 }, 00:16:57.803 { 00:16:57.803 "name": "BaseBdev3", 00:16:57.803 "uuid": "9426f031-f277-4e50-95dd-387d2241eed9", 00:16:57.803 "is_configured": true, 00:16:57.803 "data_offset": 0, 00:16:57.803 "data_size": 65536 00:16:57.803 }, 00:16:57.803 { 00:16:57.803 "name": "BaseBdev4", 00:16:57.803 "uuid": "0812a14c-397a-484b-9bdc-1fa0c956c8dc", 00:16:57.803 "is_configured": true, 00:16:57.803 "data_offset": 0, 00:16:57.803 "data_size": 65536 00:16:57.803 } 00:16:57.803 ] 00:16:57.803 }' 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.803 09:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.100 09:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.100 09:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.100 09:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.100 09:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:58.100 09:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:58.382 09:16:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.382 [2024-10-15 09:16:42.062696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.382 BaseBdev1 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:58.382 [ 00:16:58.382 { 00:16:58.382 "name": "BaseBdev1", 00:16:58.382 "aliases": [ 00:16:58.382 "3112a907-bb5b-4c7b-85fb-e5808ffe0293" 00:16:58.382 ], 00:16:58.382 "product_name": "Malloc disk", 00:16:58.382 "block_size": 512, 00:16:58.382 "num_blocks": 65536, 00:16:58.382 "uuid": "3112a907-bb5b-4c7b-85fb-e5808ffe0293", 00:16:58.382 "assigned_rate_limits": { 00:16:58.382 "rw_ios_per_sec": 0, 00:16:58.382 "rw_mbytes_per_sec": 0, 00:16:58.382 "r_mbytes_per_sec": 0, 00:16:58.382 "w_mbytes_per_sec": 0 00:16:58.382 }, 00:16:58.382 "claimed": true, 00:16:58.382 "claim_type": "exclusive_write", 00:16:58.382 "zoned": false, 00:16:58.382 "supported_io_types": { 00:16:58.382 "read": true, 00:16:58.382 "write": true, 00:16:58.382 "unmap": true, 00:16:58.382 "flush": true, 00:16:58.382 "reset": true, 00:16:58.382 "nvme_admin": false, 00:16:58.382 "nvme_io": false, 00:16:58.382 "nvme_io_md": false, 00:16:58.382 "write_zeroes": true, 00:16:58.382 "zcopy": true, 00:16:58.382 "get_zone_info": false, 00:16:58.382 "zone_management": false, 00:16:58.382 "zone_append": false, 00:16:58.382 "compare": false, 00:16:58.382 "compare_and_write": false, 00:16:58.382 "abort": true, 00:16:58.382 "seek_hole": false, 00:16:58.382 "seek_data": false, 00:16:58.382 "copy": true, 00:16:58.382 "nvme_iov_md": false 00:16:58.382 }, 00:16:58.382 "memory_domains": [ 00:16:58.382 { 00:16:58.382 "dma_device_id": "system", 00:16:58.382 "dma_device_type": 1 00:16:58.382 }, 00:16:58.382 { 00:16:58.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.382 "dma_device_type": 2 00:16:58.382 } 00:16:58.382 ], 00:16:58.382 "driver_specific": {} 00:16:58.382 } 00:16:58.382 ] 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.382 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.382 "name": "Existed_Raid", 00:16:58.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.382 "strip_size_kb": 64, 00:16:58.382 "state": "configuring", 00:16:58.382 "raid_level": "concat", 00:16:58.382 "superblock": false, 
00:16:58.382 "num_base_bdevs": 4, 00:16:58.382 "num_base_bdevs_discovered": 3, 00:16:58.382 "num_base_bdevs_operational": 4, 00:16:58.382 "base_bdevs_list": [ 00:16:58.382 { 00:16:58.382 "name": "BaseBdev1", 00:16:58.382 "uuid": "3112a907-bb5b-4c7b-85fb-e5808ffe0293", 00:16:58.382 "is_configured": true, 00:16:58.382 "data_offset": 0, 00:16:58.382 "data_size": 65536 00:16:58.382 }, 00:16:58.382 { 00:16:58.382 "name": null, 00:16:58.382 "uuid": "bf776c8a-9f7c-4edd-a19a-22192c73e816", 00:16:58.382 "is_configured": false, 00:16:58.382 "data_offset": 0, 00:16:58.382 "data_size": 65536 00:16:58.382 }, 00:16:58.382 { 00:16:58.382 "name": "BaseBdev3", 00:16:58.382 "uuid": "9426f031-f277-4e50-95dd-387d2241eed9", 00:16:58.382 "is_configured": true, 00:16:58.383 "data_offset": 0, 00:16:58.383 "data_size": 65536 00:16:58.383 }, 00:16:58.383 { 00:16:58.383 "name": "BaseBdev4", 00:16:58.383 "uuid": "0812a14c-397a-484b-9bdc-1fa0c956c8dc", 00:16:58.383 "is_configured": true, 00:16:58.383 "data_offset": 0, 00:16:58.383 "data_size": 65536 00:16:58.383 } 00:16:58.383 ] 00:16:58.383 }' 00:16:58.383 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.383 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:58.950 09:16:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.950 [2024-10-15 09:16:42.668369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.950 09:16:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.950 "name": "Existed_Raid", 00:16:58.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.950 "strip_size_kb": 64, 00:16:58.950 "state": "configuring", 00:16:58.950 "raid_level": "concat", 00:16:58.950 "superblock": false, 00:16:58.950 "num_base_bdevs": 4, 00:16:58.950 "num_base_bdevs_discovered": 2, 00:16:58.950 "num_base_bdevs_operational": 4, 00:16:58.950 "base_bdevs_list": [ 00:16:58.950 { 00:16:58.950 "name": "BaseBdev1", 00:16:58.950 "uuid": "3112a907-bb5b-4c7b-85fb-e5808ffe0293", 00:16:58.950 "is_configured": true, 00:16:58.950 "data_offset": 0, 00:16:58.950 "data_size": 65536 00:16:58.950 }, 00:16:58.950 { 00:16:58.950 "name": null, 00:16:58.950 "uuid": "bf776c8a-9f7c-4edd-a19a-22192c73e816", 00:16:58.950 "is_configured": false, 00:16:58.950 "data_offset": 0, 00:16:58.950 "data_size": 65536 00:16:58.950 }, 00:16:58.950 { 00:16:58.950 "name": null, 00:16:58.950 "uuid": "9426f031-f277-4e50-95dd-387d2241eed9", 00:16:58.950 "is_configured": false, 00:16:58.950 "data_offset": 0, 00:16:58.950 "data_size": 65536 00:16:58.950 }, 00:16:58.950 { 00:16:58.950 "name": "BaseBdev4", 00:16:58.950 "uuid": "0812a14c-397a-484b-9bdc-1fa0c956c8dc", 00:16:58.950 "is_configured": true, 00:16:58.950 "data_offset": 0, 00:16:58.950 "data_size": 65536 00:16:58.950 } 00:16:58.950 ] 00:16:58.950 }' 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.950 09:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.516 09:16:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.516 09:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.516 09:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.516 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:59.516 09:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.516 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:59.516 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:59.516 09:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.516 09:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.516 [2024-10-15 09:16:43.264647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:59.516 09:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.516 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:59.516 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:59.516 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:59.517 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:59.517 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.517 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:59.517 09:16:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.517 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.517 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.517 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.517 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.517 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.517 09:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.517 09:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.517 09:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.517 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.517 "name": "Existed_Raid", 00:16:59.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.517 "strip_size_kb": 64, 00:16:59.517 "state": "configuring", 00:16:59.517 "raid_level": "concat", 00:16:59.517 "superblock": false, 00:16:59.517 "num_base_bdevs": 4, 00:16:59.517 "num_base_bdevs_discovered": 3, 00:16:59.517 "num_base_bdevs_operational": 4, 00:16:59.517 "base_bdevs_list": [ 00:16:59.517 { 00:16:59.517 "name": "BaseBdev1", 00:16:59.517 "uuid": "3112a907-bb5b-4c7b-85fb-e5808ffe0293", 00:16:59.517 "is_configured": true, 00:16:59.517 "data_offset": 0, 00:16:59.517 "data_size": 65536 00:16:59.517 }, 00:16:59.517 { 00:16:59.517 "name": null, 00:16:59.517 "uuid": "bf776c8a-9f7c-4edd-a19a-22192c73e816", 00:16:59.517 "is_configured": false, 00:16:59.517 "data_offset": 0, 00:16:59.517 "data_size": 65536 00:16:59.517 }, 00:16:59.517 { 00:16:59.517 "name": "BaseBdev3", 00:16:59.517 "uuid": 
"9426f031-f277-4e50-95dd-387d2241eed9", 00:16:59.517 "is_configured": true, 00:16:59.517 "data_offset": 0, 00:16:59.517 "data_size": 65536 00:16:59.517 }, 00:16:59.517 { 00:16:59.517 "name": "BaseBdev4", 00:16:59.517 "uuid": "0812a14c-397a-484b-9bdc-1fa0c956c8dc", 00:16:59.517 "is_configured": true, 00:16:59.517 "data_offset": 0, 00:16:59.517 "data_size": 65536 00:16:59.517 } 00:16:59.517 ] 00:16:59.517 }' 00:16:59.517 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.517 09:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.084 [2024-10-15 09:16:43.836914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.084 "name": "Existed_Raid", 00:17:00.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.084 "strip_size_kb": 64, 00:17:00.084 "state": "configuring", 00:17:00.084 "raid_level": "concat", 00:17:00.084 "superblock": false, 00:17:00.084 "num_base_bdevs": 4, 00:17:00.084 
"num_base_bdevs_discovered": 2, 00:17:00.084 "num_base_bdevs_operational": 4, 00:17:00.084 "base_bdevs_list": [ 00:17:00.084 { 00:17:00.084 "name": null, 00:17:00.084 "uuid": "3112a907-bb5b-4c7b-85fb-e5808ffe0293", 00:17:00.084 "is_configured": false, 00:17:00.084 "data_offset": 0, 00:17:00.084 "data_size": 65536 00:17:00.084 }, 00:17:00.084 { 00:17:00.084 "name": null, 00:17:00.084 "uuid": "bf776c8a-9f7c-4edd-a19a-22192c73e816", 00:17:00.084 "is_configured": false, 00:17:00.084 "data_offset": 0, 00:17:00.084 "data_size": 65536 00:17:00.084 }, 00:17:00.084 { 00:17:00.084 "name": "BaseBdev3", 00:17:00.084 "uuid": "9426f031-f277-4e50-95dd-387d2241eed9", 00:17:00.084 "is_configured": true, 00:17:00.084 "data_offset": 0, 00:17:00.084 "data_size": 65536 00:17:00.084 }, 00:17:00.084 { 00:17:00.084 "name": "BaseBdev4", 00:17:00.084 "uuid": "0812a14c-397a-484b-9bdc-1fa0c956c8dc", 00:17:00.084 "is_configured": true, 00:17:00.084 "data_offset": 0, 00:17:00.084 "data_size": 65536 00:17:00.084 } 00:17:00.084 ] 00:17:00.084 }' 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.084 09:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.651 [2024-10-15 09:16:44.507962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.651 "name": "Existed_Raid", 00:17:00.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.651 "strip_size_kb": 64, 00:17:00.651 "state": "configuring", 00:17:00.651 "raid_level": "concat", 00:17:00.651 "superblock": false, 00:17:00.651 "num_base_bdevs": 4, 00:17:00.651 "num_base_bdevs_discovered": 3, 00:17:00.651 "num_base_bdevs_operational": 4, 00:17:00.651 "base_bdevs_list": [ 00:17:00.651 { 00:17:00.651 "name": null, 00:17:00.651 "uuid": "3112a907-bb5b-4c7b-85fb-e5808ffe0293", 00:17:00.651 "is_configured": false, 00:17:00.651 "data_offset": 0, 00:17:00.651 "data_size": 65536 00:17:00.651 }, 00:17:00.651 { 00:17:00.651 "name": "BaseBdev2", 00:17:00.651 "uuid": "bf776c8a-9f7c-4edd-a19a-22192c73e816", 00:17:00.651 "is_configured": true, 00:17:00.651 "data_offset": 0, 00:17:00.651 "data_size": 65536 00:17:00.651 }, 00:17:00.651 { 00:17:00.651 "name": "BaseBdev3", 00:17:00.651 "uuid": "9426f031-f277-4e50-95dd-387d2241eed9", 00:17:00.651 "is_configured": true, 00:17:00.651 "data_offset": 0, 00:17:00.651 "data_size": 65536 00:17:00.651 }, 00:17:00.651 { 00:17:00.651 "name": "BaseBdev4", 00:17:00.651 "uuid": "0812a14c-397a-484b-9bdc-1fa0c956c8dc", 00:17:00.651 "is_configured": true, 00:17:00.651 "data_offset": 0, 00:17:00.651 "data_size": 65536 00:17:00.651 } 00:17:00.651 ] 00:17:00.651 }' 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.651 09:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.219 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:01.219 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.219 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.219 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:01.219 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.219 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:01.219 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.219 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.219 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.219 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:01.219 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3112a907-bb5b-4c7b-85fb-e5808ffe0293 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.478 [2024-10-15 09:16:45.220168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:01.478 [2024-10-15 09:16:45.220241] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:01.478 [2024-10-15 09:16:45.220269] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:01.478 [2024-10-15 09:16:45.220613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:17:01.478 [2024-10-15 09:16:45.220817] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:01.478 [2024-10-15 09:16:45.220838] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:01.478 [2024-10-15 09:16:45.221206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.478 NewBaseBdev 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:01.478 [ 00:17:01.478 { 00:17:01.478 "name": "NewBaseBdev", 00:17:01.478 "aliases": [ 00:17:01.478 "3112a907-bb5b-4c7b-85fb-e5808ffe0293" 00:17:01.478 ], 00:17:01.478 "product_name": "Malloc disk", 00:17:01.478 "block_size": 512, 00:17:01.478 "num_blocks": 65536, 00:17:01.478 "uuid": "3112a907-bb5b-4c7b-85fb-e5808ffe0293", 00:17:01.478 "assigned_rate_limits": { 00:17:01.478 "rw_ios_per_sec": 0, 00:17:01.478 "rw_mbytes_per_sec": 0, 00:17:01.478 "r_mbytes_per_sec": 0, 00:17:01.478 "w_mbytes_per_sec": 0 00:17:01.478 }, 00:17:01.478 "claimed": true, 00:17:01.478 "claim_type": "exclusive_write", 00:17:01.478 "zoned": false, 00:17:01.478 "supported_io_types": { 00:17:01.478 "read": true, 00:17:01.478 "write": true, 00:17:01.478 "unmap": true, 00:17:01.478 "flush": true, 00:17:01.478 "reset": true, 00:17:01.478 "nvme_admin": false, 00:17:01.478 "nvme_io": false, 00:17:01.478 "nvme_io_md": false, 00:17:01.478 "write_zeroes": true, 00:17:01.478 "zcopy": true, 00:17:01.478 "get_zone_info": false, 00:17:01.478 "zone_management": false, 00:17:01.478 "zone_append": false, 00:17:01.478 "compare": false, 00:17:01.478 "compare_and_write": false, 00:17:01.478 "abort": true, 00:17:01.478 "seek_hole": false, 00:17:01.478 "seek_data": false, 00:17:01.478 "copy": true, 00:17:01.478 "nvme_iov_md": false 00:17:01.478 }, 00:17:01.478 "memory_domains": [ 00:17:01.478 { 00:17:01.478 "dma_device_id": "system", 00:17:01.478 "dma_device_type": 1 00:17:01.478 }, 00:17:01.478 { 00:17:01.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.478 "dma_device_type": 2 00:17:01.478 } 00:17:01.478 ], 00:17:01.478 "driver_specific": {} 00:17:01.478 } 00:17:01.478 ] 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.478 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.478 "name": "Existed_Raid", 00:17:01.478 "uuid": "c192ee1e-e556-44f3-b1aa-062a93315403", 00:17:01.478 "strip_size_kb": 64, 00:17:01.478 "state": "online", 00:17:01.478 "raid_level": "concat", 00:17:01.478 "superblock": false, 00:17:01.478 
"num_base_bdevs": 4, 00:17:01.478 "num_base_bdevs_discovered": 4, 00:17:01.478 "num_base_bdevs_operational": 4, 00:17:01.478 "base_bdevs_list": [ 00:17:01.478 { 00:17:01.478 "name": "NewBaseBdev", 00:17:01.478 "uuid": "3112a907-bb5b-4c7b-85fb-e5808ffe0293", 00:17:01.478 "is_configured": true, 00:17:01.478 "data_offset": 0, 00:17:01.478 "data_size": 65536 00:17:01.478 }, 00:17:01.478 { 00:17:01.478 "name": "BaseBdev2", 00:17:01.478 "uuid": "bf776c8a-9f7c-4edd-a19a-22192c73e816", 00:17:01.478 "is_configured": true, 00:17:01.478 "data_offset": 0, 00:17:01.478 "data_size": 65536 00:17:01.478 }, 00:17:01.478 { 00:17:01.478 "name": "BaseBdev3", 00:17:01.478 "uuid": "9426f031-f277-4e50-95dd-387d2241eed9", 00:17:01.478 "is_configured": true, 00:17:01.478 "data_offset": 0, 00:17:01.478 "data_size": 65536 00:17:01.478 }, 00:17:01.478 { 00:17:01.479 "name": "BaseBdev4", 00:17:01.479 "uuid": "0812a14c-397a-484b-9bdc-1fa0c956c8dc", 00:17:01.479 "is_configured": true, 00:17:01.479 "data_offset": 0, 00:17:01.479 "data_size": 65536 00:17:01.479 } 00:17:01.479 ] 00:17:01.479 }' 00:17:01.479 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.479 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.046 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:02.046 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:02.046 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:02.046 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:02.046 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:02.046 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:02.046 09:16:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:02.046 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:02.046 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.046 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.046 [2024-10-15 09:16:45.796887] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.046 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.046 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:02.046 "name": "Existed_Raid", 00:17:02.046 "aliases": [ 00:17:02.046 "c192ee1e-e556-44f3-b1aa-062a93315403" 00:17:02.046 ], 00:17:02.046 "product_name": "Raid Volume", 00:17:02.046 "block_size": 512, 00:17:02.046 "num_blocks": 262144, 00:17:02.046 "uuid": "c192ee1e-e556-44f3-b1aa-062a93315403", 00:17:02.046 "assigned_rate_limits": { 00:17:02.046 "rw_ios_per_sec": 0, 00:17:02.046 "rw_mbytes_per_sec": 0, 00:17:02.046 "r_mbytes_per_sec": 0, 00:17:02.046 "w_mbytes_per_sec": 0 00:17:02.046 }, 00:17:02.046 "claimed": false, 00:17:02.046 "zoned": false, 00:17:02.046 "supported_io_types": { 00:17:02.046 "read": true, 00:17:02.046 "write": true, 00:17:02.046 "unmap": true, 00:17:02.046 "flush": true, 00:17:02.046 "reset": true, 00:17:02.046 "nvme_admin": false, 00:17:02.046 "nvme_io": false, 00:17:02.046 "nvme_io_md": false, 00:17:02.046 "write_zeroes": true, 00:17:02.046 "zcopy": false, 00:17:02.046 "get_zone_info": false, 00:17:02.046 "zone_management": false, 00:17:02.046 "zone_append": false, 00:17:02.046 "compare": false, 00:17:02.046 "compare_and_write": false, 00:17:02.046 "abort": false, 00:17:02.046 "seek_hole": false, 00:17:02.046 "seek_data": false, 00:17:02.046 "copy": false, 00:17:02.046 "nvme_iov_md": false 00:17:02.046 }, 
00:17:02.046 "memory_domains": [ 00:17:02.046 { 00:17:02.046 "dma_device_id": "system", 00:17:02.046 "dma_device_type": 1 00:17:02.046 }, 00:17:02.046 { 00:17:02.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.046 "dma_device_type": 2 00:17:02.046 }, 00:17:02.046 { 00:17:02.046 "dma_device_id": "system", 00:17:02.046 "dma_device_type": 1 00:17:02.046 }, 00:17:02.046 { 00:17:02.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.046 "dma_device_type": 2 00:17:02.046 }, 00:17:02.046 { 00:17:02.046 "dma_device_id": "system", 00:17:02.046 "dma_device_type": 1 00:17:02.046 }, 00:17:02.046 { 00:17:02.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.046 "dma_device_type": 2 00:17:02.046 }, 00:17:02.047 { 00:17:02.047 "dma_device_id": "system", 00:17:02.047 "dma_device_type": 1 00:17:02.047 }, 00:17:02.047 { 00:17:02.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.047 "dma_device_type": 2 00:17:02.047 } 00:17:02.047 ], 00:17:02.047 "driver_specific": { 00:17:02.047 "raid": { 00:17:02.047 "uuid": "c192ee1e-e556-44f3-b1aa-062a93315403", 00:17:02.047 "strip_size_kb": 64, 00:17:02.047 "state": "online", 00:17:02.047 "raid_level": "concat", 00:17:02.047 "superblock": false, 00:17:02.047 "num_base_bdevs": 4, 00:17:02.047 "num_base_bdevs_discovered": 4, 00:17:02.047 "num_base_bdevs_operational": 4, 00:17:02.047 "base_bdevs_list": [ 00:17:02.047 { 00:17:02.047 "name": "NewBaseBdev", 00:17:02.047 "uuid": "3112a907-bb5b-4c7b-85fb-e5808ffe0293", 00:17:02.047 "is_configured": true, 00:17:02.047 "data_offset": 0, 00:17:02.047 "data_size": 65536 00:17:02.047 }, 00:17:02.047 { 00:17:02.047 "name": "BaseBdev2", 00:17:02.047 "uuid": "bf776c8a-9f7c-4edd-a19a-22192c73e816", 00:17:02.047 "is_configured": true, 00:17:02.047 "data_offset": 0, 00:17:02.047 "data_size": 65536 00:17:02.047 }, 00:17:02.047 { 00:17:02.047 "name": "BaseBdev3", 00:17:02.047 "uuid": "9426f031-f277-4e50-95dd-387d2241eed9", 00:17:02.047 "is_configured": true, 00:17:02.047 "data_offset": 0, 
00:17:02.047 "data_size": 65536 00:17:02.047 }, 00:17:02.047 { 00:17:02.047 "name": "BaseBdev4", 00:17:02.047 "uuid": "0812a14c-397a-484b-9bdc-1fa0c956c8dc", 00:17:02.047 "is_configured": true, 00:17:02.047 "data_offset": 0, 00:17:02.047 "data_size": 65536 00:17:02.047 } 00:17:02.047 ] 00:17:02.047 } 00:17:02.047 } 00:17:02.047 }' 00:17:02.047 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:02.047 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:02.047 BaseBdev2 00:17:02.047 BaseBdev3 00:17:02.047 BaseBdev4' 00:17:02.047 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.047 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:02.047 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.047 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:02.047 09:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.047 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.047 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.047 09:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.308 [2024-10-15 09:16:46.172554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:02.308 [2024-10-15 09:16:46.172613] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.308 [2024-10-15 09:16:46.172755] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.308 [2024-10-15 09:16:46.172882] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.308 [2024-10-15 09:16:46.172904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71619 00:17:02.308 09:16:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 71619 ']' 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 71619 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71619 00:17:02.308 killing process with pid 71619 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71619' 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 71619 00:17:02.308 [2024-10-15 09:16:46.212463] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:02.308 09:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 71619 00:17:02.876 [2024-10-15 09:16:46.604976] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:04.256 ************************************ 00:17:04.256 END TEST raid_state_function_test 00:17:04.256 ************************************ 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:04.256 00:17:04.256 real 0m13.276s 00:17:04.256 user 0m21.796s 00:17:04.256 sys 0m1.964s 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.256 09:16:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:17:04.256 09:16:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:04.256 09:16:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:04.256 09:16:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:04.256 ************************************ 00:17:04.256 START TEST raid_state_function_test_sb 00:17:04.256 ************************************ 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:04.256 Process raid pid: 72308 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=72308 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72308' 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72308 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72308 ']' 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:04.256 09:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.256 [2024-10-15 09:16:47.952800] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:17:04.256 [2024-10-15 09:16:47.953276] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.256 [2024-10-15 09:16:48.135760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.514 [2024-10-15 09:16:48.287572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.772 [2024-10-15 09:16:48.504445] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.772 [2024-10-15 09:16:48.504815] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.033 [2024-10-15 09:16:48.893715] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:05.033 [2024-10-15 09:16:48.893985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:05.033 [2024-10-15 09:16:48.894113] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:05.033 [2024-10-15 09:16:48.894275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:05.033 [2024-10-15 09:16:48.894394] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:17:05.033 [2024-10-15 09:16:48.894428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:05.033 [2024-10-15 09:16:48.894454] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:05.033 [2024-10-15 09:16:48.894484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.033 09:16:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.033 "name": "Existed_Raid", 00:17:05.033 "uuid": "1422b083-cbe4-49bc-b1ad-5724b8b9f7ba", 00:17:05.033 "strip_size_kb": 64, 00:17:05.033 "state": "configuring", 00:17:05.033 "raid_level": "concat", 00:17:05.033 "superblock": true, 00:17:05.033 "num_base_bdevs": 4, 00:17:05.033 "num_base_bdevs_discovered": 0, 00:17:05.033 "num_base_bdevs_operational": 4, 00:17:05.033 "base_bdevs_list": [ 00:17:05.033 { 00:17:05.033 "name": "BaseBdev1", 00:17:05.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.033 "is_configured": false, 00:17:05.033 "data_offset": 0, 00:17:05.033 "data_size": 0 00:17:05.033 }, 00:17:05.033 { 00:17:05.033 "name": "BaseBdev2", 00:17:05.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.033 "is_configured": false, 00:17:05.033 "data_offset": 0, 00:17:05.033 "data_size": 0 00:17:05.033 }, 00:17:05.033 { 00:17:05.033 "name": "BaseBdev3", 00:17:05.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.033 "is_configured": false, 00:17:05.033 "data_offset": 0, 00:17:05.033 "data_size": 0 00:17:05.033 }, 00:17:05.033 { 00:17:05.033 "name": "BaseBdev4", 00:17:05.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.033 "is_configured": false, 00:17:05.033 "data_offset": 0, 00:17:05.033 "data_size": 0 00:17:05.033 } 00:17:05.033 ] 00:17:05.033 }' 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.033 09:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.601 09:16:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.601 [2024-10-15 09:16:49.445788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:05.601 [2024-10-15 09:16:49.445839] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.601 [2024-10-15 09:16:49.453816] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:05.601 [2024-10-15 09:16:49.453872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:05.601 [2024-10-15 09:16:49.453889] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:05.601 [2024-10-15 09:16:49.453905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:05.601 [2024-10-15 09:16:49.453925] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:05.601 [2024-10-15 09:16:49.453942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:05.601 [2024-10-15 09:16:49.453952] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:17:05.601 [2024-10-15 09:16:49.453967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.601 [2024-10-15 09:16:49.503076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.601 BaseBdev1 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.601 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.601 [ 00:17:05.601 { 00:17:05.601 "name": "BaseBdev1", 00:17:05.601 "aliases": [ 00:17:05.601 "1ff0b965-a188-425d-8a9f-2cf90cb7b9ef" 00:17:05.601 ], 00:17:05.601 "product_name": "Malloc disk", 00:17:05.601 "block_size": 512, 00:17:05.601 "num_blocks": 65536, 00:17:05.601 "uuid": "1ff0b965-a188-425d-8a9f-2cf90cb7b9ef", 00:17:05.601 "assigned_rate_limits": { 00:17:05.601 "rw_ios_per_sec": 0, 00:17:05.601 "rw_mbytes_per_sec": 0, 00:17:05.601 "r_mbytes_per_sec": 0, 00:17:05.601 "w_mbytes_per_sec": 0 00:17:05.601 }, 00:17:05.601 "claimed": true, 00:17:05.859 "claim_type": "exclusive_write", 00:17:05.859 "zoned": false, 00:17:05.859 "supported_io_types": { 00:17:05.859 "read": true, 00:17:05.859 "write": true, 00:17:05.859 "unmap": true, 00:17:05.859 "flush": true, 00:17:05.859 "reset": true, 00:17:05.859 "nvme_admin": false, 00:17:05.859 "nvme_io": false, 00:17:05.859 "nvme_io_md": false, 00:17:05.859 "write_zeroes": true, 00:17:05.859 "zcopy": true, 00:17:05.859 "get_zone_info": false, 00:17:05.859 "zone_management": false, 00:17:05.859 "zone_append": false, 00:17:05.859 "compare": false, 00:17:05.859 "compare_and_write": false, 00:17:05.859 "abort": true, 00:17:05.859 "seek_hole": false, 00:17:05.859 "seek_data": false, 00:17:05.859 "copy": true, 00:17:05.859 "nvme_iov_md": false 00:17:05.859 }, 00:17:05.859 "memory_domains": [ 00:17:05.859 { 00:17:05.859 "dma_device_id": "system", 00:17:05.859 "dma_device_type": 1 00:17:05.859 }, 00:17:05.859 { 00:17:05.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.859 "dma_device_type": 2 00:17:05.859 } 
00:17:05.859 ], 00:17:05.859 "driver_specific": {} 00:17:05.859 } 00:17:05.859 ] 00:17:05.859 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.860 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:05.860 09:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:05.860 09:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.860 09:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.860 09:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:05.860 09:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.860 09:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:05.860 09:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.860 09:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.860 09:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.860 09:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.860 09:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.860 09:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.860 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.860 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.860 09:16:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.860 09:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.860 "name": "Existed_Raid", 00:17:05.860 "uuid": "6abf1e2d-d2eb-414a-9213-34482ecb0fa2", 00:17:05.860 "strip_size_kb": 64, 00:17:05.860 "state": "configuring", 00:17:05.860 "raid_level": "concat", 00:17:05.860 "superblock": true, 00:17:05.860 "num_base_bdevs": 4, 00:17:05.860 "num_base_bdevs_discovered": 1, 00:17:05.860 "num_base_bdevs_operational": 4, 00:17:05.860 "base_bdevs_list": [ 00:17:05.860 { 00:17:05.860 "name": "BaseBdev1", 00:17:05.860 "uuid": "1ff0b965-a188-425d-8a9f-2cf90cb7b9ef", 00:17:05.860 "is_configured": true, 00:17:05.860 "data_offset": 2048, 00:17:05.860 "data_size": 63488 00:17:05.860 }, 00:17:05.860 { 00:17:05.860 "name": "BaseBdev2", 00:17:05.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.860 "is_configured": false, 00:17:05.860 "data_offset": 0, 00:17:05.860 "data_size": 0 00:17:05.860 }, 00:17:05.860 { 00:17:05.860 "name": "BaseBdev3", 00:17:05.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.860 "is_configured": false, 00:17:05.860 "data_offset": 0, 00:17:05.860 "data_size": 0 00:17:05.860 }, 00:17:05.860 { 00:17:05.860 "name": "BaseBdev4", 00:17:05.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.860 "is_configured": false, 00:17:05.860 "data_offset": 0, 00:17:05.860 "data_size": 0 00:17:05.860 } 00:17:05.860 ] 00:17:05.860 }' 00:17:05.860 09:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.860 09:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.428 09:16:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.428 [2024-10-15 09:16:50.059263] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:06.428 [2024-10-15 09:16:50.059339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.428 [2024-10-15 09:16:50.071339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.428 [2024-10-15 09:16:50.074042] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:06.428 [2024-10-15 09:16:50.074096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:06.428 [2024-10-15 09:16:50.074113] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:06.428 [2024-10-15 09:16:50.074147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:06.428 [2024-10-15 09:16:50.074158] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:06.428 [2024-10-15 09:16:50.074172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:17:06.428 "name": "Existed_Raid", 00:17:06.428 "uuid": "1a8862da-ed0a-4907-b10f-8d1891948c22", 00:17:06.428 "strip_size_kb": 64, 00:17:06.428 "state": "configuring", 00:17:06.428 "raid_level": "concat", 00:17:06.428 "superblock": true, 00:17:06.428 "num_base_bdevs": 4, 00:17:06.428 "num_base_bdevs_discovered": 1, 00:17:06.428 "num_base_bdevs_operational": 4, 00:17:06.428 "base_bdevs_list": [ 00:17:06.428 { 00:17:06.428 "name": "BaseBdev1", 00:17:06.428 "uuid": "1ff0b965-a188-425d-8a9f-2cf90cb7b9ef", 00:17:06.428 "is_configured": true, 00:17:06.428 "data_offset": 2048, 00:17:06.428 "data_size": 63488 00:17:06.428 }, 00:17:06.428 { 00:17:06.428 "name": "BaseBdev2", 00:17:06.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.428 "is_configured": false, 00:17:06.428 "data_offset": 0, 00:17:06.428 "data_size": 0 00:17:06.428 }, 00:17:06.428 { 00:17:06.428 "name": "BaseBdev3", 00:17:06.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.428 "is_configured": false, 00:17:06.428 "data_offset": 0, 00:17:06.428 "data_size": 0 00:17:06.428 }, 00:17:06.428 { 00:17:06.428 "name": "BaseBdev4", 00:17:06.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.428 "is_configured": false, 00:17:06.428 "data_offset": 0, 00:17:06.428 "data_size": 0 00:17:06.428 } 00:17:06.428 ] 00:17:06.428 }' 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.428 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.698 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:06.698 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.698 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.958 [2024-10-15 09:16:50.642633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:17:06.958 BaseBdev2 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.958 [ 00:17:06.958 { 00:17:06.958 "name": "BaseBdev2", 00:17:06.958 "aliases": [ 00:17:06.958 "3975de03-010d-4cfc-af21-e304151de41d" 00:17:06.958 ], 00:17:06.958 "product_name": "Malloc disk", 00:17:06.958 "block_size": 512, 00:17:06.958 "num_blocks": 65536, 00:17:06.958 "uuid": "3975de03-010d-4cfc-af21-e304151de41d", 
00:17:06.958 "assigned_rate_limits": { 00:17:06.958 "rw_ios_per_sec": 0, 00:17:06.958 "rw_mbytes_per_sec": 0, 00:17:06.958 "r_mbytes_per_sec": 0, 00:17:06.958 "w_mbytes_per_sec": 0 00:17:06.958 }, 00:17:06.958 "claimed": true, 00:17:06.958 "claim_type": "exclusive_write", 00:17:06.958 "zoned": false, 00:17:06.958 "supported_io_types": { 00:17:06.958 "read": true, 00:17:06.958 "write": true, 00:17:06.958 "unmap": true, 00:17:06.958 "flush": true, 00:17:06.958 "reset": true, 00:17:06.958 "nvme_admin": false, 00:17:06.958 "nvme_io": false, 00:17:06.958 "nvme_io_md": false, 00:17:06.958 "write_zeroes": true, 00:17:06.958 "zcopy": true, 00:17:06.958 "get_zone_info": false, 00:17:06.958 "zone_management": false, 00:17:06.958 "zone_append": false, 00:17:06.958 "compare": false, 00:17:06.958 "compare_and_write": false, 00:17:06.958 "abort": true, 00:17:06.958 "seek_hole": false, 00:17:06.958 "seek_data": false, 00:17:06.958 "copy": true, 00:17:06.958 "nvme_iov_md": false 00:17:06.958 }, 00:17:06.958 "memory_domains": [ 00:17:06.958 { 00:17:06.958 "dma_device_id": "system", 00:17:06.958 "dma_device_type": 1 00:17:06.958 }, 00:17:06.958 { 00:17:06.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.958 "dma_device_type": 2 00:17:06.958 } 00:17:06.958 ], 00:17:06.958 "driver_specific": {} 00:17:06.958 } 00:17:06.958 ] 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.958 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.959 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.959 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.959 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.959 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.959 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.959 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.959 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.959 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.959 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.959 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.959 "name": "Existed_Raid", 00:17:06.959 "uuid": "1a8862da-ed0a-4907-b10f-8d1891948c22", 00:17:06.959 "strip_size_kb": 64, 00:17:06.959 "state": "configuring", 00:17:06.959 "raid_level": "concat", 00:17:06.959 "superblock": true, 00:17:06.959 "num_base_bdevs": 4, 00:17:06.959 "num_base_bdevs_discovered": 2, 00:17:06.959 
"num_base_bdevs_operational": 4, 00:17:06.959 "base_bdevs_list": [ 00:17:06.959 { 00:17:06.959 "name": "BaseBdev1", 00:17:06.959 "uuid": "1ff0b965-a188-425d-8a9f-2cf90cb7b9ef", 00:17:06.959 "is_configured": true, 00:17:06.959 "data_offset": 2048, 00:17:06.959 "data_size": 63488 00:17:06.959 }, 00:17:06.959 { 00:17:06.959 "name": "BaseBdev2", 00:17:06.959 "uuid": "3975de03-010d-4cfc-af21-e304151de41d", 00:17:06.959 "is_configured": true, 00:17:06.959 "data_offset": 2048, 00:17:06.959 "data_size": 63488 00:17:06.959 }, 00:17:06.959 { 00:17:06.959 "name": "BaseBdev3", 00:17:06.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.959 "is_configured": false, 00:17:06.959 "data_offset": 0, 00:17:06.959 "data_size": 0 00:17:06.959 }, 00:17:06.959 { 00:17:06.959 "name": "BaseBdev4", 00:17:06.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.959 "is_configured": false, 00:17:06.959 "data_offset": 0, 00:17:06.959 "data_size": 0 00:17:06.959 } 00:17:06.959 ] 00:17:06.959 }' 00:17:06.959 09:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.959 09:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.527 [2024-10-15 09:16:51.256457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:07.527 BaseBdev3 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.527 [ 00:17:07.527 { 00:17:07.527 "name": "BaseBdev3", 00:17:07.527 "aliases": [ 00:17:07.527 "db48aa4e-12ec-48cf-b5a6-9427f456b4f0" 00:17:07.527 ], 00:17:07.527 "product_name": "Malloc disk", 00:17:07.527 "block_size": 512, 00:17:07.527 "num_blocks": 65536, 00:17:07.527 "uuid": "db48aa4e-12ec-48cf-b5a6-9427f456b4f0", 00:17:07.527 "assigned_rate_limits": { 00:17:07.527 "rw_ios_per_sec": 0, 00:17:07.527 "rw_mbytes_per_sec": 0, 00:17:07.527 "r_mbytes_per_sec": 0, 00:17:07.527 "w_mbytes_per_sec": 0 00:17:07.527 }, 00:17:07.527 "claimed": true, 00:17:07.527 "claim_type": "exclusive_write", 00:17:07.527 "zoned": false, 00:17:07.527 "supported_io_types": { 
00:17:07.527 "read": true, 00:17:07.527 "write": true, 00:17:07.527 "unmap": true, 00:17:07.527 "flush": true, 00:17:07.527 "reset": true, 00:17:07.527 "nvme_admin": false, 00:17:07.527 "nvme_io": false, 00:17:07.527 "nvme_io_md": false, 00:17:07.527 "write_zeroes": true, 00:17:07.527 "zcopy": true, 00:17:07.527 "get_zone_info": false, 00:17:07.527 "zone_management": false, 00:17:07.527 "zone_append": false, 00:17:07.527 "compare": false, 00:17:07.527 "compare_and_write": false, 00:17:07.527 "abort": true, 00:17:07.527 "seek_hole": false, 00:17:07.527 "seek_data": false, 00:17:07.527 "copy": true, 00:17:07.527 "nvme_iov_md": false 00:17:07.527 }, 00:17:07.527 "memory_domains": [ 00:17:07.527 { 00:17:07.527 "dma_device_id": "system", 00:17:07.527 "dma_device_type": 1 00:17:07.527 }, 00:17:07.527 { 00:17:07.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.527 "dma_device_type": 2 00:17:07.527 } 00:17:07.527 ], 00:17:07.527 "driver_specific": {} 00:17:07.527 } 00:17:07.527 ] 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.527 "name": "Existed_Raid", 00:17:07.527 "uuid": "1a8862da-ed0a-4907-b10f-8d1891948c22", 00:17:07.527 "strip_size_kb": 64, 00:17:07.527 "state": "configuring", 00:17:07.527 "raid_level": "concat", 00:17:07.527 "superblock": true, 00:17:07.527 "num_base_bdevs": 4, 00:17:07.527 "num_base_bdevs_discovered": 3, 00:17:07.527 "num_base_bdevs_operational": 4, 00:17:07.527 "base_bdevs_list": [ 00:17:07.527 { 00:17:07.527 "name": "BaseBdev1", 00:17:07.527 "uuid": "1ff0b965-a188-425d-8a9f-2cf90cb7b9ef", 00:17:07.527 "is_configured": true, 00:17:07.527 "data_offset": 2048, 00:17:07.527 "data_size": 63488 00:17:07.527 }, 00:17:07.527 { 00:17:07.527 "name": "BaseBdev2", 00:17:07.527 
"uuid": "3975de03-010d-4cfc-af21-e304151de41d", 00:17:07.527 "is_configured": true, 00:17:07.527 "data_offset": 2048, 00:17:07.527 "data_size": 63488 00:17:07.527 }, 00:17:07.527 { 00:17:07.527 "name": "BaseBdev3", 00:17:07.527 "uuid": "db48aa4e-12ec-48cf-b5a6-9427f456b4f0", 00:17:07.527 "is_configured": true, 00:17:07.527 "data_offset": 2048, 00:17:07.527 "data_size": 63488 00:17:07.527 }, 00:17:07.527 { 00:17:07.527 "name": "BaseBdev4", 00:17:07.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.527 "is_configured": false, 00:17:07.527 "data_offset": 0, 00:17:07.527 "data_size": 0 00:17:07.527 } 00:17:07.527 ] 00:17:07.527 }' 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.527 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.095 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:08.095 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.095 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.095 [2024-10-15 09:16:51.888115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:08.095 [2024-10-15 09:16:51.888514] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:08.095 [2024-10-15 09:16:51.888534] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:08.095 [2024-10-15 09:16:51.888856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:08.095 BaseBdev4 00:17:08.095 [2024-10-15 09:16:51.889052] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:08.095 [2024-10-15 09:16:51.889074] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:17:08.095 [2024-10-15 09:16:51.889315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.095 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.095 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:08.095 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:08.095 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:08.095 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:08.095 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:08.095 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:08.095 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:08.095 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.095 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.095 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.095 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:08.095 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.095 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.095 [ 00:17:08.095 { 00:17:08.095 "name": "BaseBdev4", 00:17:08.095 "aliases": [ 00:17:08.095 "2766f558-ecbb-41cf-99e7-b4661b3d4e98" 00:17:08.095 ], 00:17:08.095 "product_name": "Malloc disk", 00:17:08.095 "block_size": 512, 00:17:08.095 
"num_blocks": 65536, 00:17:08.095 "uuid": "2766f558-ecbb-41cf-99e7-b4661b3d4e98", 00:17:08.095 "assigned_rate_limits": { 00:17:08.095 "rw_ios_per_sec": 0, 00:17:08.095 "rw_mbytes_per_sec": 0, 00:17:08.095 "r_mbytes_per_sec": 0, 00:17:08.095 "w_mbytes_per_sec": 0 00:17:08.095 }, 00:17:08.095 "claimed": true, 00:17:08.095 "claim_type": "exclusive_write", 00:17:08.095 "zoned": false, 00:17:08.095 "supported_io_types": { 00:17:08.095 "read": true, 00:17:08.095 "write": true, 00:17:08.095 "unmap": true, 00:17:08.095 "flush": true, 00:17:08.095 "reset": true, 00:17:08.095 "nvme_admin": false, 00:17:08.096 "nvme_io": false, 00:17:08.096 "nvme_io_md": false, 00:17:08.096 "write_zeroes": true, 00:17:08.096 "zcopy": true, 00:17:08.096 "get_zone_info": false, 00:17:08.096 "zone_management": false, 00:17:08.096 "zone_append": false, 00:17:08.096 "compare": false, 00:17:08.096 "compare_and_write": false, 00:17:08.096 "abort": true, 00:17:08.096 "seek_hole": false, 00:17:08.096 "seek_data": false, 00:17:08.096 "copy": true, 00:17:08.096 "nvme_iov_md": false 00:17:08.096 }, 00:17:08.096 "memory_domains": [ 00:17:08.096 { 00:17:08.096 "dma_device_id": "system", 00:17:08.096 "dma_device_type": 1 00:17:08.096 }, 00:17:08.096 { 00:17:08.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.096 "dma_device_type": 2 00:17:08.096 } 00:17:08.096 ], 00:17:08.096 "driver_specific": {} 00:17:08.096 } 00:17:08.096 ] 00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.096 "name": "Existed_Raid", 00:17:08.096 "uuid": "1a8862da-ed0a-4907-b10f-8d1891948c22", 00:17:08.096 "strip_size_kb": 64, 00:17:08.096 "state": "online", 00:17:08.096 "raid_level": "concat", 00:17:08.096 "superblock": true, 00:17:08.096 "num_base_bdevs": 4, 
00:17:08.096 "num_base_bdevs_discovered": 4, 00:17:08.096 "num_base_bdevs_operational": 4, 00:17:08.096 "base_bdevs_list": [ 00:17:08.096 { 00:17:08.096 "name": "BaseBdev1", 00:17:08.096 "uuid": "1ff0b965-a188-425d-8a9f-2cf90cb7b9ef", 00:17:08.096 "is_configured": true, 00:17:08.096 "data_offset": 2048, 00:17:08.096 "data_size": 63488 00:17:08.096 }, 00:17:08.096 { 00:17:08.096 "name": "BaseBdev2", 00:17:08.096 "uuid": "3975de03-010d-4cfc-af21-e304151de41d", 00:17:08.096 "is_configured": true, 00:17:08.096 "data_offset": 2048, 00:17:08.096 "data_size": 63488 00:17:08.096 }, 00:17:08.096 { 00:17:08.096 "name": "BaseBdev3", 00:17:08.096 "uuid": "db48aa4e-12ec-48cf-b5a6-9427f456b4f0", 00:17:08.096 "is_configured": true, 00:17:08.096 "data_offset": 2048, 00:17:08.096 "data_size": 63488 00:17:08.096 }, 00:17:08.096 { 00:17:08.096 "name": "BaseBdev4", 00:17:08.096 "uuid": "2766f558-ecbb-41cf-99e7-b4661b3d4e98", 00:17:08.096 "is_configured": true, 00:17:08.096 "data_offset": 2048, 00:17:08.096 "data_size": 63488 00:17:08.096 } 00:17:08.096 ] 00:17:08.096 }' 00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.096 09:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.664 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:08.664 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:08.664 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:08.664 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:08.664 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:08.664 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:08.664 
09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:08.664 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.664 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.664 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:08.664 [2024-10-15 09:16:52.436858] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.664 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.664 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:08.664 "name": "Existed_Raid", 00:17:08.664 "aliases": [ 00:17:08.664 "1a8862da-ed0a-4907-b10f-8d1891948c22" 00:17:08.664 ], 00:17:08.664 "product_name": "Raid Volume", 00:17:08.664 "block_size": 512, 00:17:08.664 "num_blocks": 253952, 00:17:08.664 "uuid": "1a8862da-ed0a-4907-b10f-8d1891948c22", 00:17:08.664 "assigned_rate_limits": { 00:17:08.664 "rw_ios_per_sec": 0, 00:17:08.664 "rw_mbytes_per_sec": 0, 00:17:08.664 "r_mbytes_per_sec": 0, 00:17:08.664 "w_mbytes_per_sec": 0 00:17:08.664 }, 00:17:08.664 "claimed": false, 00:17:08.664 "zoned": false, 00:17:08.664 "supported_io_types": { 00:17:08.664 "read": true, 00:17:08.664 "write": true, 00:17:08.664 "unmap": true, 00:17:08.664 "flush": true, 00:17:08.664 "reset": true, 00:17:08.664 "nvme_admin": false, 00:17:08.664 "nvme_io": false, 00:17:08.664 "nvme_io_md": false, 00:17:08.664 "write_zeroes": true, 00:17:08.664 "zcopy": false, 00:17:08.664 "get_zone_info": false, 00:17:08.664 "zone_management": false, 00:17:08.664 "zone_append": false, 00:17:08.664 "compare": false, 00:17:08.664 "compare_and_write": false, 00:17:08.664 "abort": false, 00:17:08.664 "seek_hole": false, 00:17:08.664 "seek_data": false, 00:17:08.664 "copy": false, 00:17:08.664 
"nvme_iov_md": false 00:17:08.664 }, 00:17:08.664 "memory_domains": [ 00:17:08.664 { 00:17:08.664 "dma_device_id": "system", 00:17:08.664 "dma_device_type": 1 00:17:08.664 }, 00:17:08.664 { 00:17:08.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.664 "dma_device_type": 2 00:17:08.664 }, 00:17:08.664 { 00:17:08.664 "dma_device_id": "system", 00:17:08.664 "dma_device_type": 1 00:17:08.664 }, 00:17:08.664 { 00:17:08.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.664 "dma_device_type": 2 00:17:08.664 }, 00:17:08.664 { 00:17:08.664 "dma_device_id": "system", 00:17:08.664 "dma_device_type": 1 00:17:08.664 }, 00:17:08.664 { 00:17:08.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.664 "dma_device_type": 2 00:17:08.664 }, 00:17:08.664 { 00:17:08.664 "dma_device_id": "system", 00:17:08.664 "dma_device_type": 1 00:17:08.664 }, 00:17:08.664 { 00:17:08.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.664 "dma_device_type": 2 00:17:08.664 } 00:17:08.664 ], 00:17:08.664 "driver_specific": { 00:17:08.664 "raid": { 00:17:08.664 "uuid": "1a8862da-ed0a-4907-b10f-8d1891948c22", 00:17:08.664 "strip_size_kb": 64, 00:17:08.664 "state": "online", 00:17:08.664 "raid_level": "concat", 00:17:08.664 "superblock": true, 00:17:08.664 "num_base_bdevs": 4, 00:17:08.664 "num_base_bdevs_discovered": 4, 00:17:08.664 "num_base_bdevs_operational": 4, 00:17:08.664 "base_bdevs_list": [ 00:17:08.664 { 00:17:08.664 "name": "BaseBdev1", 00:17:08.664 "uuid": "1ff0b965-a188-425d-8a9f-2cf90cb7b9ef", 00:17:08.664 "is_configured": true, 00:17:08.664 "data_offset": 2048, 00:17:08.664 "data_size": 63488 00:17:08.664 }, 00:17:08.664 { 00:17:08.664 "name": "BaseBdev2", 00:17:08.664 "uuid": "3975de03-010d-4cfc-af21-e304151de41d", 00:17:08.664 "is_configured": true, 00:17:08.664 "data_offset": 2048, 00:17:08.664 "data_size": 63488 00:17:08.664 }, 00:17:08.664 { 00:17:08.664 "name": "BaseBdev3", 00:17:08.664 "uuid": "db48aa4e-12ec-48cf-b5a6-9427f456b4f0", 00:17:08.664 "is_configured": true, 
00:17:08.664 "data_offset": 2048, 00:17:08.664 "data_size": 63488 00:17:08.664 }, 00:17:08.664 { 00:17:08.664 "name": "BaseBdev4", 00:17:08.664 "uuid": "2766f558-ecbb-41cf-99e7-b4661b3d4e98", 00:17:08.664 "is_configured": true, 00:17:08.664 "data_offset": 2048, 00:17:08.664 "data_size": 63488 00:17:08.664 } 00:17:08.664 ] 00:17:08.664 } 00:17:08.664 } 00:17:08.664 }' 00:17:08.664 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:08.664 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:08.664 BaseBdev2 00:17:08.664 BaseBdev3 00:17:08.664 BaseBdev4' 00:17:08.664 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.664 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:08.664 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.923 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:08.923 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.923 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.923 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.923 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.923 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.924 09:16:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.924 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.924 [2024-10-15 09:16:52.812726] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:08.924 [2024-10-15 09:16:52.812771] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:08.924 [2024-10-15 09:16:52.812844] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:09.182 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.182 "name": "Existed_Raid", 00:17:09.182 "uuid": "1a8862da-ed0a-4907-b10f-8d1891948c22", 00:17:09.182 "strip_size_kb": 64, 00:17:09.182 "state": "offline", 00:17:09.182 "raid_level": "concat", 00:17:09.182 "superblock": true, 00:17:09.182 "num_base_bdevs": 4, 00:17:09.182 "num_base_bdevs_discovered": 3, 00:17:09.182 "num_base_bdevs_operational": 3, 00:17:09.183 "base_bdevs_list": [ 00:17:09.183 { 00:17:09.183 "name": null, 00:17:09.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.183 "is_configured": false, 00:17:09.183 "data_offset": 0, 00:17:09.183 "data_size": 63488 00:17:09.183 }, 00:17:09.183 { 00:17:09.183 "name": "BaseBdev2", 00:17:09.183 "uuid": "3975de03-010d-4cfc-af21-e304151de41d", 00:17:09.183 "is_configured": true, 00:17:09.183 "data_offset": 2048, 00:17:09.183 "data_size": 63488 00:17:09.183 }, 00:17:09.183 { 00:17:09.183 "name": "BaseBdev3", 00:17:09.183 "uuid": "db48aa4e-12ec-48cf-b5a6-9427f456b4f0", 00:17:09.183 "is_configured": true, 00:17:09.183 "data_offset": 2048, 00:17:09.183 "data_size": 63488 00:17:09.183 }, 00:17:09.183 { 00:17:09.183 "name": "BaseBdev4", 00:17:09.183 "uuid": "2766f558-ecbb-41cf-99e7-b4661b3d4e98", 00:17:09.183 "is_configured": true, 00:17:09.183 "data_offset": 2048, 00:17:09.183 "data_size": 63488 00:17:09.183 } 00:17:09.183 ] 00:17:09.183 }' 00:17:09.183 09:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.183 09:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:09.751 09:16:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.751 [2024-10-15 09:16:53.509183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.751 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.751 [2024-10-15 09:16:53.663741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:10.010 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.010 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:10.010 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:10.010 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.010 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.010 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:10.010 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.010 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.010 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:10.010 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:10.010 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:10.010 09:16:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.010 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.010 [2024-10-15 09:16:53.810017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:10.010 [2024-10-15 09:16:53.810087] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:10.010 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.010 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:10.010 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:10.010 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.010 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:10.010 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.010 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.010 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.271 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:10.271 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:10.271 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:10.271 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:10.271 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:10.271 09:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:17:10.271 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.271 09:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.271 BaseBdev2 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.271 [ 00:17:10.271 { 00:17:10.271 "name": "BaseBdev2", 00:17:10.271 "aliases": [ 00:17:10.271 
"4a0a6e09-ce9f-4802-9552-1806284bcd93" 00:17:10.271 ], 00:17:10.271 "product_name": "Malloc disk", 00:17:10.271 "block_size": 512, 00:17:10.271 "num_blocks": 65536, 00:17:10.271 "uuid": "4a0a6e09-ce9f-4802-9552-1806284bcd93", 00:17:10.271 "assigned_rate_limits": { 00:17:10.271 "rw_ios_per_sec": 0, 00:17:10.271 "rw_mbytes_per_sec": 0, 00:17:10.271 "r_mbytes_per_sec": 0, 00:17:10.271 "w_mbytes_per_sec": 0 00:17:10.271 }, 00:17:10.271 "claimed": false, 00:17:10.271 "zoned": false, 00:17:10.271 "supported_io_types": { 00:17:10.271 "read": true, 00:17:10.271 "write": true, 00:17:10.271 "unmap": true, 00:17:10.271 "flush": true, 00:17:10.271 "reset": true, 00:17:10.271 "nvme_admin": false, 00:17:10.271 "nvme_io": false, 00:17:10.271 "nvme_io_md": false, 00:17:10.271 "write_zeroes": true, 00:17:10.271 "zcopy": true, 00:17:10.271 "get_zone_info": false, 00:17:10.271 "zone_management": false, 00:17:10.271 "zone_append": false, 00:17:10.271 "compare": false, 00:17:10.271 "compare_and_write": false, 00:17:10.271 "abort": true, 00:17:10.271 "seek_hole": false, 00:17:10.271 "seek_data": false, 00:17:10.271 "copy": true, 00:17:10.271 "nvme_iov_md": false 00:17:10.271 }, 00:17:10.271 "memory_domains": [ 00:17:10.271 { 00:17:10.271 "dma_device_id": "system", 00:17:10.271 "dma_device_type": 1 00:17:10.271 }, 00:17:10.271 { 00:17:10.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.271 "dma_device_type": 2 00:17:10.271 } 00:17:10.271 ], 00:17:10.271 "driver_specific": {} 00:17:10.271 } 00:17:10.271 ] 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:10.271 09:16:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.271 BaseBdev3 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.271 [ 00:17:10.271 { 
00:17:10.271 "name": "BaseBdev3", 00:17:10.271 "aliases": [ 00:17:10.271 "22c73bf1-1555-4c1d-a6c9-4481c810625a" 00:17:10.271 ], 00:17:10.271 "product_name": "Malloc disk", 00:17:10.271 "block_size": 512, 00:17:10.271 "num_blocks": 65536, 00:17:10.271 "uuid": "22c73bf1-1555-4c1d-a6c9-4481c810625a", 00:17:10.271 "assigned_rate_limits": { 00:17:10.271 "rw_ios_per_sec": 0, 00:17:10.271 "rw_mbytes_per_sec": 0, 00:17:10.271 "r_mbytes_per_sec": 0, 00:17:10.271 "w_mbytes_per_sec": 0 00:17:10.271 }, 00:17:10.271 "claimed": false, 00:17:10.271 "zoned": false, 00:17:10.271 "supported_io_types": { 00:17:10.271 "read": true, 00:17:10.271 "write": true, 00:17:10.271 "unmap": true, 00:17:10.271 "flush": true, 00:17:10.271 "reset": true, 00:17:10.271 "nvme_admin": false, 00:17:10.271 "nvme_io": false, 00:17:10.271 "nvme_io_md": false, 00:17:10.271 "write_zeroes": true, 00:17:10.271 "zcopy": true, 00:17:10.271 "get_zone_info": false, 00:17:10.271 "zone_management": false, 00:17:10.271 "zone_append": false, 00:17:10.271 "compare": false, 00:17:10.271 "compare_and_write": false, 00:17:10.271 "abort": true, 00:17:10.271 "seek_hole": false, 00:17:10.271 "seek_data": false, 00:17:10.271 "copy": true, 00:17:10.271 "nvme_iov_md": false 00:17:10.271 }, 00:17:10.271 "memory_domains": [ 00:17:10.271 { 00:17:10.271 "dma_device_id": "system", 00:17:10.271 "dma_device_type": 1 00:17:10.271 }, 00:17:10.271 { 00:17:10.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.271 "dma_device_type": 2 00:17:10.271 } 00:17:10.271 ], 00:17:10.271 "driver_specific": {} 00:17:10.271 } 00:17:10.271 ] 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.271 BaseBdev4 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:10.271 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:10.272 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:10.272 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:10.272 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:10.272 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:10.272 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:10.272 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.272 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.272 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.272 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:10.272 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.272 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:17:10.532 [ 00:17:10.532 { 00:17:10.532 "name": "BaseBdev4", 00:17:10.532 "aliases": [ 00:17:10.532 "70f144e3-65aa-406c-ba77-627a66a9be86" 00:17:10.532 ], 00:17:10.532 "product_name": "Malloc disk", 00:17:10.532 "block_size": 512, 00:17:10.532 "num_blocks": 65536, 00:17:10.532 "uuid": "70f144e3-65aa-406c-ba77-627a66a9be86", 00:17:10.532 "assigned_rate_limits": { 00:17:10.532 "rw_ios_per_sec": 0, 00:17:10.532 "rw_mbytes_per_sec": 0, 00:17:10.532 "r_mbytes_per_sec": 0, 00:17:10.532 "w_mbytes_per_sec": 0 00:17:10.532 }, 00:17:10.532 "claimed": false, 00:17:10.532 "zoned": false, 00:17:10.532 "supported_io_types": { 00:17:10.532 "read": true, 00:17:10.532 "write": true, 00:17:10.532 "unmap": true, 00:17:10.532 "flush": true, 00:17:10.532 "reset": true, 00:17:10.532 "nvme_admin": false, 00:17:10.532 "nvme_io": false, 00:17:10.532 "nvme_io_md": false, 00:17:10.532 "write_zeroes": true, 00:17:10.532 "zcopy": true, 00:17:10.532 "get_zone_info": false, 00:17:10.532 "zone_management": false, 00:17:10.532 "zone_append": false, 00:17:10.532 "compare": false, 00:17:10.532 "compare_and_write": false, 00:17:10.532 "abort": true, 00:17:10.532 "seek_hole": false, 00:17:10.532 "seek_data": false, 00:17:10.532 "copy": true, 00:17:10.532 "nvme_iov_md": false 00:17:10.532 }, 00:17:10.532 "memory_domains": [ 00:17:10.532 { 00:17:10.532 "dma_device_id": "system", 00:17:10.532 "dma_device_type": 1 00:17:10.532 }, 00:17:10.532 { 00:17:10.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.532 "dma_device_type": 2 00:17:10.532 } 00:17:10.532 ], 00:17:10.532 "driver_specific": {} 00:17:10.532 } 00:17:10.532 ] 00:17:10.532 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.532 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:10.532 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:10.532 09:16:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:10.532 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:10.532 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.532 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.532 [2024-10-15 09:16:54.217734] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:10.532 [2024-10-15 09:16:54.217794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:10.532 [2024-10-15 09:16:54.217845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:10.532 [2024-10-15 09:16:54.220706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:10.532 [2024-10-15 09:16:54.220772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:10.532 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.532 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:10.532 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:10.532 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:10.532 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:10.532 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.532 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:10.532 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.532 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.532 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.533 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.533 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.533 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.533 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.533 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.533 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.533 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.533 "name": "Existed_Raid", 00:17:10.533 "uuid": "e7f56b43-e288-46a1-8898-446dacb9e537", 00:17:10.533 "strip_size_kb": 64, 00:17:10.533 "state": "configuring", 00:17:10.533 "raid_level": "concat", 00:17:10.533 "superblock": true, 00:17:10.533 "num_base_bdevs": 4, 00:17:10.533 "num_base_bdevs_discovered": 3, 00:17:10.533 "num_base_bdevs_operational": 4, 00:17:10.533 "base_bdevs_list": [ 00:17:10.533 { 00:17:10.533 "name": "BaseBdev1", 00:17:10.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.533 "is_configured": false, 00:17:10.533 "data_offset": 0, 00:17:10.533 "data_size": 0 00:17:10.533 }, 00:17:10.533 { 00:17:10.533 "name": "BaseBdev2", 00:17:10.533 "uuid": "4a0a6e09-ce9f-4802-9552-1806284bcd93", 00:17:10.533 "is_configured": true, 00:17:10.533 "data_offset": 2048, 00:17:10.533 "data_size": 63488 
00:17:10.533 }, 00:17:10.533 { 00:17:10.533 "name": "BaseBdev3", 00:17:10.533 "uuid": "22c73bf1-1555-4c1d-a6c9-4481c810625a", 00:17:10.533 "is_configured": true, 00:17:10.533 "data_offset": 2048, 00:17:10.533 "data_size": 63488 00:17:10.533 }, 00:17:10.533 { 00:17:10.533 "name": "BaseBdev4", 00:17:10.533 "uuid": "70f144e3-65aa-406c-ba77-627a66a9be86", 00:17:10.533 "is_configured": true, 00:17:10.533 "data_offset": 2048, 00:17:10.533 "data_size": 63488 00:17:10.533 } 00:17:10.533 ] 00:17:10.533 }' 00:17:10.533 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.533 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.125 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:11.125 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.125 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.125 [2024-10-15 09:16:54.753992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:11.125 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.126 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:11.126 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:11.126 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:11.126 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:11.126 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.126 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:11.126 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.126 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.126 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.126 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.126 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.126 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.126 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.126 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.126 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.126 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.126 "name": "Existed_Raid", 00:17:11.126 "uuid": "e7f56b43-e288-46a1-8898-446dacb9e537", 00:17:11.126 "strip_size_kb": 64, 00:17:11.126 "state": "configuring", 00:17:11.126 "raid_level": "concat", 00:17:11.126 "superblock": true, 00:17:11.126 "num_base_bdevs": 4, 00:17:11.126 "num_base_bdevs_discovered": 2, 00:17:11.126 "num_base_bdevs_operational": 4, 00:17:11.126 "base_bdevs_list": [ 00:17:11.126 { 00:17:11.126 "name": "BaseBdev1", 00:17:11.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.126 "is_configured": false, 00:17:11.126 "data_offset": 0, 00:17:11.126 "data_size": 0 00:17:11.126 }, 00:17:11.126 { 00:17:11.126 "name": null, 00:17:11.126 "uuid": "4a0a6e09-ce9f-4802-9552-1806284bcd93", 00:17:11.126 "is_configured": false, 00:17:11.126 "data_offset": 0, 00:17:11.126 "data_size": 63488 
00:17:11.126 }, 00:17:11.126 { 00:17:11.126 "name": "BaseBdev3", 00:17:11.126 "uuid": "22c73bf1-1555-4c1d-a6c9-4481c810625a", 00:17:11.126 "is_configured": true, 00:17:11.126 "data_offset": 2048, 00:17:11.126 "data_size": 63488 00:17:11.126 }, 00:17:11.126 { 00:17:11.126 "name": "BaseBdev4", 00:17:11.126 "uuid": "70f144e3-65aa-406c-ba77-627a66a9be86", 00:17:11.126 "is_configured": true, 00:17:11.126 "data_offset": 2048, 00:17:11.126 "data_size": 63488 00:17:11.126 } 00:17:11.126 ] 00:17:11.126 }' 00:17:11.126 09:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.126 09:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.395 09:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.395 09:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:11.395 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.395 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.668 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.668 09:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:11.668 09:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:11.668 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.668 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.668 [2024-10-15 09:16:55.404530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.668 BaseBdev1 00:17:11.668 09:16:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.668 09:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.669 [ 00:17:11.669 { 00:17:11.669 "name": "BaseBdev1", 00:17:11.669 "aliases": [ 00:17:11.669 "36cf8c27-d3db-451c-a339-b64789a55acf" 00:17:11.669 ], 00:17:11.669 "product_name": "Malloc disk", 00:17:11.669 "block_size": 512, 00:17:11.669 "num_blocks": 65536, 00:17:11.669 "uuid": "36cf8c27-d3db-451c-a339-b64789a55acf", 00:17:11.669 "assigned_rate_limits": { 00:17:11.669 "rw_ios_per_sec": 0, 00:17:11.669 "rw_mbytes_per_sec": 0, 
00:17:11.669 "r_mbytes_per_sec": 0, 00:17:11.669 "w_mbytes_per_sec": 0 00:17:11.669 }, 00:17:11.669 "claimed": true, 00:17:11.669 "claim_type": "exclusive_write", 00:17:11.669 "zoned": false, 00:17:11.669 "supported_io_types": { 00:17:11.669 "read": true, 00:17:11.669 "write": true, 00:17:11.669 "unmap": true, 00:17:11.669 "flush": true, 00:17:11.669 "reset": true, 00:17:11.669 "nvme_admin": false, 00:17:11.669 "nvme_io": false, 00:17:11.669 "nvme_io_md": false, 00:17:11.669 "write_zeroes": true, 00:17:11.669 "zcopy": true, 00:17:11.669 "get_zone_info": false, 00:17:11.669 "zone_management": false, 00:17:11.669 "zone_append": false, 00:17:11.669 "compare": false, 00:17:11.669 "compare_and_write": false, 00:17:11.669 "abort": true, 00:17:11.669 "seek_hole": false, 00:17:11.669 "seek_data": false, 00:17:11.669 "copy": true, 00:17:11.669 "nvme_iov_md": false 00:17:11.669 }, 00:17:11.669 "memory_domains": [ 00:17:11.669 { 00:17:11.669 "dma_device_id": "system", 00:17:11.669 "dma_device_type": 1 00:17:11.669 }, 00:17:11.669 { 00:17:11.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.669 "dma_device_type": 2 00:17:11.669 } 00:17:11.669 ], 00:17:11.669 "driver_specific": {} 00:17:11.669 } 00:17:11.669 ] 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:11.669 09:16:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.669 "name": "Existed_Raid", 00:17:11.669 "uuid": "e7f56b43-e288-46a1-8898-446dacb9e537", 00:17:11.669 "strip_size_kb": 64, 00:17:11.669 "state": "configuring", 00:17:11.669 "raid_level": "concat", 00:17:11.669 "superblock": true, 00:17:11.669 "num_base_bdevs": 4, 00:17:11.669 "num_base_bdevs_discovered": 3, 00:17:11.669 "num_base_bdevs_operational": 4, 00:17:11.669 "base_bdevs_list": [ 00:17:11.669 { 00:17:11.669 "name": "BaseBdev1", 00:17:11.669 "uuid": "36cf8c27-d3db-451c-a339-b64789a55acf", 00:17:11.669 "is_configured": true, 00:17:11.669 "data_offset": 2048, 00:17:11.669 "data_size": 63488 00:17:11.669 }, 00:17:11.669 { 
00:17:11.669 "name": null, 00:17:11.669 "uuid": "4a0a6e09-ce9f-4802-9552-1806284bcd93", 00:17:11.669 "is_configured": false, 00:17:11.669 "data_offset": 0, 00:17:11.669 "data_size": 63488 00:17:11.669 }, 00:17:11.669 { 00:17:11.669 "name": "BaseBdev3", 00:17:11.669 "uuid": "22c73bf1-1555-4c1d-a6c9-4481c810625a", 00:17:11.669 "is_configured": true, 00:17:11.669 "data_offset": 2048, 00:17:11.669 "data_size": 63488 00:17:11.669 }, 00:17:11.669 { 00:17:11.669 "name": "BaseBdev4", 00:17:11.669 "uuid": "70f144e3-65aa-406c-ba77-627a66a9be86", 00:17:11.669 "is_configured": true, 00:17:11.669 "data_offset": 2048, 00:17:11.669 "data_size": 63488 00:17:11.669 } 00:17:11.669 ] 00:17:11.669 }' 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.669 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.258 09:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:12.258 09:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.258 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.258 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.258 09:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.258 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:12.258 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:12.258 09:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.258 09:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.258 [2024-10-15 09:16:56.024855] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:12.259 09:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.259 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:12.259 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:12.259 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:12.259 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:12.259 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.259 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:12.259 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.259 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.259 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.259 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.259 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.259 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.259 09:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.259 09:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.259 09:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.259 09:16:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.259 "name": "Existed_Raid", 00:17:12.259 "uuid": "e7f56b43-e288-46a1-8898-446dacb9e537", 00:17:12.259 "strip_size_kb": 64, 00:17:12.259 "state": "configuring", 00:17:12.259 "raid_level": "concat", 00:17:12.259 "superblock": true, 00:17:12.259 "num_base_bdevs": 4, 00:17:12.259 "num_base_bdevs_discovered": 2, 00:17:12.259 "num_base_bdevs_operational": 4, 00:17:12.259 "base_bdevs_list": [ 00:17:12.259 { 00:17:12.259 "name": "BaseBdev1", 00:17:12.259 "uuid": "36cf8c27-d3db-451c-a339-b64789a55acf", 00:17:12.259 "is_configured": true, 00:17:12.259 "data_offset": 2048, 00:17:12.259 "data_size": 63488 00:17:12.259 }, 00:17:12.259 { 00:17:12.259 "name": null, 00:17:12.259 "uuid": "4a0a6e09-ce9f-4802-9552-1806284bcd93", 00:17:12.259 "is_configured": false, 00:17:12.259 "data_offset": 0, 00:17:12.259 "data_size": 63488 00:17:12.259 }, 00:17:12.259 { 00:17:12.259 "name": null, 00:17:12.259 "uuid": "22c73bf1-1555-4c1d-a6c9-4481c810625a", 00:17:12.259 "is_configured": false, 00:17:12.259 "data_offset": 0, 00:17:12.259 "data_size": 63488 00:17:12.259 }, 00:17:12.259 { 00:17:12.259 "name": "BaseBdev4", 00:17:12.259 "uuid": "70f144e3-65aa-406c-ba77-627a66a9be86", 00:17:12.259 "is_configured": true, 00:17:12.259 "data_offset": 2048, 00:17:12.259 "data_size": 63488 00:17:12.259 } 00:17:12.259 ] 00:17:12.259 }' 00:17:12.259 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.259 09:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.828 
09:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.828 [2024-10-15 09:16:56.649055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.828 "name": "Existed_Raid", 00:17:12.828 "uuid": "e7f56b43-e288-46a1-8898-446dacb9e537", 00:17:12.828 "strip_size_kb": 64, 00:17:12.828 "state": "configuring", 00:17:12.828 "raid_level": "concat", 00:17:12.828 "superblock": true, 00:17:12.828 "num_base_bdevs": 4, 00:17:12.828 "num_base_bdevs_discovered": 3, 00:17:12.828 "num_base_bdevs_operational": 4, 00:17:12.828 "base_bdevs_list": [ 00:17:12.828 { 00:17:12.828 "name": "BaseBdev1", 00:17:12.828 "uuid": "36cf8c27-d3db-451c-a339-b64789a55acf", 00:17:12.828 "is_configured": true, 00:17:12.828 "data_offset": 2048, 00:17:12.828 "data_size": 63488 00:17:12.828 }, 00:17:12.828 { 00:17:12.828 "name": null, 00:17:12.828 "uuid": "4a0a6e09-ce9f-4802-9552-1806284bcd93", 00:17:12.828 "is_configured": false, 00:17:12.828 "data_offset": 0, 00:17:12.828 "data_size": 63488 00:17:12.828 }, 00:17:12.828 { 00:17:12.828 "name": "BaseBdev3", 00:17:12.828 "uuid": "22c73bf1-1555-4c1d-a6c9-4481c810625a", 00:17:12.828 "is_configured": true, 00:17:12.828 "data_offset": 2048, 00:17:12.828 "data_size": 63488 00:17:12.828 }, 00:17:12.828 { 00:17:12.828 "name": "BaseBdev4", 00:17:12.828 "uuid": 
"70f144e3-65aa-406c-ba77-627a66a9be86", 00:17:12.828 "is_configured": true, 00:17:12.828 "data_offset": 2048, 00:17:12.828 "data_size": 63488 00:17:12.828 } 00:17:12.828 ] 00:17:12.828 }' 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.828 09:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.396 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:13.396 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.396 09:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.396 09:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.396 09:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.396 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:13.396 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:13.396 09:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.396 09:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.396 [2024-10-15 09:16:57.257285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:13.655 09:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.655 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:13.655 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.655 09:16:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.655 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:13.655 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.655 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:13.655 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.655 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.655 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.655 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.655 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.655 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.655 09:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.655 09:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.655 09:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.655 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.655 "name": "Existed_Raid", 00:17:13.655 "uuid": "e7f56b43-e288-46a1-8898-446dacb9e537", 00:17:13.655 "strip_size_kb": 64, 00:17:13.655 "state": "configuring", 00:17:13.655 "raid_level": "concat", 00:17:13.655 "superblock": true, 00:17:13.655 "num_base_bdevs": 4, 00:17:13.655 "num_base_bdevs_discovered": 2, 00:17:13.655 "num_base_bdevs_operational": 4, 00:17:13.655 "base_bdevs_list": [ 00:17:13.655 { 00:17:13.655 "name": null, 00:17:13.655 
"uuid": "36cf8c27-d3db-451c-a339-b64789a55acf", 00:17:13.655 "is_configured": false, 00:17:13.655 "data_offset": 0, 00:17:13.655 "data_size": 63488 00:17:13.655 }, 00:17:13.655 { 00:17:13.655 "name": null, 00:17:13.655 "uuid": "4a0a6e09-ce9f-4802-9552-1806284bcd93", 00:17:13.655 "is_configured": false, 00:17:13.655 "data_offset": 0, 00:17:13.655 "data_size": 63488 00:17:13.655 }, 00:17:13.655 { 00:17:13.655 "name": "BaseBdev3", 00:17:13.655 "uuid": "22c73bf1-1555-4c1d-a6c9-4481c810625a", 00:17:13.655 "is_configured": true, 00:17:13.655 "data_offset": 2048, 00:17:13.655 "data_size": 63488 00:17:13.655 }, 00:17:13.655 { 00:17:13.655 "name": "BaseBdev4", 00:17:13.655 "uuid": "70f144e3-65aa-406c-ba77-627a66a9be86", 00:17:13.655 "is_configured": true, 00:17:13.655 "data_offset": 2048, 00:17:13.655 "data_size": 63488 00:17:13.655 } 00:17:13.655 ] 00:17:13.655 }' 00:17:13.655 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.655 09:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.221 [2024-10-15 09:16:57.962151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.221 09:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.221 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.221 "name": "Existed_Raid", 00:17:14.221 "uuid": "e7f56b43-e288-46a1-8898-446dacb9e537", 00:17:14.221 "strip_size_kb": 64, 00:17:14.221 "state": "configuring", 00:17:14.221 "raid_level": "concat", 00:17:14.221 "superblock": true, 00:17:14.221 "num_base_bdevs": 4, 00:17:14.221 "num_base_bdevs_discovered": 3, 00:17:14.221 "num_base_bdevs_operational": 4, 00:17:14.221 "base_bdevs_list": [ 00:17:14.221 { 00:17:14.221 "name": null, 00:17:14.221 "uuid": "36cf8c27-d3db-451c-a339-b64789a55acf", 00:17:14.221 "is_configured": false, 00:17:14.221 "data_offset": 0, 00:17:14.221 "data_size": 63488 00:17:14.221 }, 00:17:14.221 { 00:17:14.221 "name": "BaseBdev2", 00:17:14.221 "uuid": "4a0a6e09-ce9f-4802-9552-1806284bcd93", 00:17:14.221 "is_configured": true, 00:17:14.221 "data_offset": 2048, 00:17:14.221 "data_size": 63488 00:17:14.221 }, 00:17:14.221 { 00:17:14.221 "name": "BaseBdev3", 00:17:14.221 "uuid": "22c73bf1-1555-4c1d-a6c9-4481c810625a", 00:17:14.221 "is_configured": true, 00:17:14.221 "data_offset": 2048, 00:17:14.221 "data_size": 63488 00:17:14.221 }, 00:17:14.221 { 00:17:14.221 "name": "BaseBdev4", 00:17:14.221 "uuid": "70f144e3-65aa-406c-ba77-627a66a9be86", 00:17:14.221 "is_configured": true, 00:17:14.221 "data_offset": 2048, 00:17:14.221 "data_size": 63488 00:17:14.221 } 00:17:14.221 ] 00:17:14.221 }' 00:17:14.221 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.221 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.800 09:16:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 36cf8c27-d3db-451c-a339-b64789a55acf 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.800 [2024-10-15 09:16:58.661857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:14.800 [2024-10-15 09:16:58.662462] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:14.800 NewBaseBdev 00:17:14.800 [2024-10-15 09:16:58.662601] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:14.800 [2024-10-15 09:16:58.662964] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:14.800 [2024-10-15 09:16:58.663176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:14.800 [2024-10-15 09:16:58.663199] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:14.800 [2024-10-15 09:16:58.663369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.800 
09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.800 [ 00:17:14.800 { 00:17:14.800 "name": "NewBaseBdev", 00:17:14.800 "aliases": [ 00:17:14.800 "36cf8c27-d3db-451c-a339-b64789a55acf" 00:17:14.800 ], 00:17:14.800 "product_name": "Malloc disk", 00:17:14.800 "block_size": 512, 00:17:14.800 "num_blocks": 65536, 00:17:14.800 "uuid": "36cf8c27-d3db-451c-a339-b64789a55acf", 00:17:14.800 "assigned_rate_limits": { 00:17:14.800 "rw_ios_per_sec": 0, 00:17:14.800 "rw_mbytes_per_sec": 0, 00:17:14.800 "r_mbytes_per_sec": 0, 00:17:14.800 "w_mbytes_per_sec": 0 00:17:14.800 }, 00:17:14.800 "claimed": true, 00:17:14.800 "claim_type": "exclusive_write", 00:17:14.800 "zoned": false, 00:17:14.800 "supported_io_types": { 00:17:14.800 "read": true, 00:17:14.800 "write": true, 00:17:14.800 "unmap": true, 00:17:14.800 "flush": true, 00:17:14.800 "reset": true, 00:17:14.800 "nvme_admin": false, 00:17:14.800 "nvme_io": false, 00:17:14.800 "nvme_io_md": false, 00:17:14.800 "write_zeroes": true, 00:17:14.800 "zcopy": true, 00:17:14.800 "get_zone_info": false, 00:17:14.800 "zone_management": false, 00:17:14.800 "zone_append": false, 00:17:14.800 "compare": false, 00:17:14.800 "compare_and_write": false, 00:17:14.800 "abort": true, 00:17:14.800 "seek_hole": false, 00:17:14.800 "seek_data": false, 00:17:14.800 "copy": true, 00:17:14.800 "nvme_iov_md": false 00:17:14.800 }, 00:17:14.800 "memory_domains": [ 00:17:14.800 { 00:17:14.800 "dma_device_id": "system", 00:17:14.800 "dma_device_type": 1 00:17:14.800 }, 00:17:14.800 { 00:17:14.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.800 "dma_device_type": 2 00:17:14.800 } 00:17:14.800 ], 00:17:14.800 "driver_specific": {} 00:17:14.800 } 00:17:14.800 ] 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:14.800 09:16:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.800 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.090 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.090 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.090 "name": "Existed_Raid", 00:17:15.090 "uuid": "e7f56b43-e288-46a1-8898-446dacb9e537", 00:17:15.090 "strip_size_kb": 64, 00:17:15.090 
"state": "online", 00:17:15.090 "raid_level": "concat", 00:17:15.090 "superblock": true, 00:17:15.090 "num_base_bdevs": 4, 00:17:15.090 "num_base_bdevs_discovered": 4, 00:17:15.090 "num_base_bdevs_operational": 4, 00:17:15.090 "base_bdevs_list": [ 00:17:15.090 { 00:17:15.090 "name": "NewBaseBdev", 00:17:15.090 "uuid": "36cf8c27-d3db-451c-a339-b64789a55acf", 00:17:15.090 "is_configured": true, 00:17:15.090 "data_offset": 2048, 00:17:15.090 "data_size": 63488 00:17:15.090 }, 00:17:15.090 { 00:17:15.090 "name": "BaseBdev2", 00:17:15.090 "uuid": "4a0a6e09-ce9f-4802-9552-1806284bcd93", 00:17:15.090 "is_configured": true, 00:17:15.090 "data_offset": 2048, 00:17:15.090 "data_size": 63488 00:17:15.090 }, 00:17:15.090 { 00:17:15.090 "name": "BaseBdev3", 00:17:15.090 "uuid": "22c73bf1-1555-4c1d-a6c9-4481c810625a", 00:17:15.090 "is_configured": true, 00:17:15.090 "data_offset": 2048, 00:17:15.090 "data_size": 63488 00:17:15.090 }, 00:17:15.090 { 00:17:15.090 "name": "BaseBdev4", 00:17:15.090 "uuid": "70f144e3-65aa-406c-ba77-627a66a9be86", 00:17:15.090 "is_configured": true, 00:17:15.090 "data_offset": 2048, 00:17:15.090 "data_size": 63488 00:17:15.090 } 00:17:15.090 ] 00:17:15.090 }' 00:17:15.090 09:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.090 09:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.349 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:15.349 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:15.349 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:15.349 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:15.349 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:15.349 
09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:15.349 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:15.349 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.349 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.349 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:15.349 [2024-10-15 09:16:59.218574] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.349 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.349 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:15.349 "name": "Existed_Raid", 00:17:15.349 "aliases": [ 00:17:15.349 "e7f56b43-e288-46a1-8898-446dacb9e537" 00:17:15.349 ], 00:17:15.349 "product_name": "Raid Volume", 00:17:15.349 "block_size": 512, 00:17:15.349 "num_blocks": 253952, 00:17:15.349 "uuid": "e7f56b43-e288-46a1-8898-446dacb9e537", 00:17:15.349 "assigned_rate_limits": { 00:17:15.349 "rw_ios_per_sec": 0, 00:17:15.349 "rw_mbytes_per_sec": 0, 00:17:15.349 "r_mbytes_per_sec": 0, 00:17:15.349 "w_mbytes_per_sec": 0 00:17:15.349 }, 00:17:15.349 "claimed": false, 00:17:15.349 "zoned": false, 00:17:15.349 "supported_io_types": { 00:17:15.349 "read": true, 00:17:15.349 "write": true, 00:17:15.349 "unmap": true, 00:17:15.349 "flush": true, 00:17:15.349 "reset": true, 00:17:15.349 "nvme_admin": false, 00:17:15.349 "nvme_io": false, 00:17:15.349 "nvme_io_md": false, 00:17:15.349 "write_zeroes": true, 00:17:15.349 "zcopy": false, 00:17:15.349 "get_zone_info": false, 00:17:15.349 "zone_management": false, 00:17:15.349 "zone_append": false, 00:17:15.349 "compare": false, 00:17:15.349 "compare_and_write": false, 00:17:15.349 "abort": 
false, 00:17:15.349 "seek_hole": false, 00:17:15.349 "seek_data": false, 00:17:15.349 "copy": false, 00:17:15.349 "nvme_iov_md": false 00:17:15.349 }, 00:17:15.349 "memory_domains": [ 00:17:15.349 { 00:17:15.349 "dma_device_id": "system", 00:17:15.349 "dma_device_type": 1 00:17:15.349 }, 00:17:15.349 { 00:17:15.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.349 "dma_device_type": 2 00:17:15.349 }, 00:17:15.349 { 00:17:15.349 "dma_device_id": "system", 00:17:15.349 "dma_device_type": 1 00:17:15.349 }, 00:17:15.349 { 00:17:15.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.349 "dma_device_type": 2 00:17:15.349 }, 00:17:15.349 { 00:17:15.349 "dma_device_id": "system", 00:17:15.349 "dma_device_type": 1 00:17:15.349 }, 00:17:15.349 { 00:17:15.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.349 "dma_device_type": 2 00:17:15.349 }, 00:17:15.349 { 00:17:15.349 "dma_device_id": "system", 00:17:15.349 "dma_device_type": 1 00:17:15.349 }, 00:17:15.349 { 00:17:15.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.349 "dma_device_type": 2 00:17:15.349 } 00:17:15.349 ], 00:17:15.349 "driver_specific": { 00:17:15.349 "raid": { 00:17:15.349 "uuid": "e7f56b43-e288-46a1-8898-446dacb9e537", 00:17:15.349 "strip_size_kb": 64, 00:17:15.349 "state": "online", 00:17:15.349 "raid_level": "concat", 00:17:15.349 "superblock": true, 00:17:15.349 "num_base_bdevs": 4, 00:17:15.349 "num_base_bdevs_discovered": 4, 00:17:15.349 "num_base_bdevs_operational": 4, 00:17:15.349 "base_bdevs_list": [ 00:17:15.349 { 00:17:15.349 "name": "NewBaseBdev", 00:17:15.349 "uuid": "36cf8c27-d3db-451c-a339-b64789a55acf", 00:17:15.349 "is_configured": true, 00:17:15.349 "data_offset": 2048, 00:17:15.349 "data_size": 63488 00:17:15.349 }, 00:17:15.349 { 00:17:15.349 "name": "BaseBdev2", 00:17:15.349 "uuid": "4a0a6e09-ce9f-4802-9552-1806284bcd93", 00:17:15.349 "is_configured": true, 00:17:15.349 "data_offset": 2048, 00:17:15.349 "data_size": 63488 00:17:15.349 }, 00:17:15.349 { 00:17:15.349 
"name": "BaseBdev3", 00:17:15.349 "uuid": "22c73bf1-1555-4c1d-a6c9-4481c810625a", 00:17:15.349 "is_configured": true, 00:17:15.349 "data_offset": 2048, 00:17:15.349 "data_size": 63488 00:17:15.349 }, 00:17:15.349 { 00:17:15.349 "name": "BaseBdev4", 00:17:15.349 "uuid": "70f144e3-65aa-406c-ba77-627a66a9be86", 00:17:15.349 "is_configured": true, 00:17:15.349 "data_offset": 2048, 00:17:15.349 "data_size": 63488 00:17:15.349 } 00:17:15.349 ] 00:17:15.349 } 00:17:15.349 } 00:17:15.349 }' 00:17:15.349 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:15.607 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:15.607 BaseBdev2 00:17:15.607 BaseBdev3 00:17:15.607 BaseBdev4' 00:17:15.607 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.607 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:15.607 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.607 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:15.608 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.608 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.608 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.608 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.608 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.608 09:16:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.608 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.608 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:15.608 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.608 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.608 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.608 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.608 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.608 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.608 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.608 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:15.608 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.608 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.608 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.608 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.867 [2024-10-15 09:16:59.602158] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:15.867 [2024-10-15 09:16:59.602314] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:15.867 [2024-10-15 09:16:59.602518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.867 [2024-10-15 09:16:59.602737] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.867 [2024-10-15 09:16:59.602854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72308 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72308 ']' 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72308 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72308 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72308' 00:17:15.867 killing process with pid 72308 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72308 00:17:15.867 [2024-10-15 09:16:59.643179] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:15.867 09:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72308 00:17:16.127 [2024-10-15 09:17:00.031704] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:17.506 09:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:17.506 ************************************ 00:17:17.506 END TEST raid_state_function_test_sb 00:17:17.506 ************************************ 00:17:17.506 00:17:17.506 real 0m13.362s 00:17:17.506 user 0m21.992s 00:17:17.506 sys 
0m1.953s 00:17:17.506 09:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:17.506 09:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.506 09:17:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:17:17.506 09:17:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:17.506 09:17:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:17.506 09:17:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:17.506 ************************************ 00:17:17.506 START TEST raid_superblock_test 00:17:17.506 ************************************ 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72996 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72996 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72996 ']' 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:17.506 09:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.506 [2024-10-15 09:17:01.351242] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:17:17.506 [2024-10-15 09:17:01.351477] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72996 ] 00:17:17.764 [2024-10-15 09:17:01.517301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.764 [2024-10-15 09:17:01.662947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.022 [2024-10-15 09:17:01.889635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.022 [2024-10-15 09:17:01.889692] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:17:18.590 
09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.590 malloc1 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.590 [2024-10-15 09:17:02.448649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:18.590 [2024-10-15 09:17:02.448916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.590 [2024-10-15 09:17:02.449078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:18.590 [2024-10-15 09:17:02.449236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.590 [2024-10-15 09:17:02.452544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.590 [2024-10-15 09:17:02.452721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:18.590 pt1 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.590 malloc2 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.590 [2024-10-15 09:17:02.508334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:18.590 [2024-10-15 09:17:02.508560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.590 [2024-10-15 09:17:02.508610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:18.590 [2024-10-15 09:17:02.508626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.590 [2024-10-15 09:17:02.511702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.590 [2024-10-15 09:17:02.511759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:18.590 
pt2 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.590 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.849 malloc3 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.849 [2024-10-15 09:17:02.575749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:18.849 [2024-10-15 09:17:02.575959] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.849 [2024-10-15 09:17:02.576043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:18.849 [2024-10-15 09:17:02.576172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.849 [2024-10-15 09:17:02.579261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.849 [2024-10-15 09:17:02.579416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:18.849 pt3 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.849 malloc4 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.849 [2024-10-15 09:17:02.635302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:18.849 [2024-10-15 09:17:02.635377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.849 [2024-10-15 09:17:02.635409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:18.849 [2024-10-15 09:17:02.635424] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.849 [2024-10-15 09:17:02.638376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.849 pt4 00:17:18.849 [2024-10-15 09:17:02.638546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.849 [2024-10-15 09:17:02.643484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:18.849 [2024-10-15 
09:17:02.646091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:18.849 [2024-10-15 09:17:02.646212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:18.849 [2024-10-15 09:17:02.646310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:18.849 [2024-10-15 09:17:02.646578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:18.849 [2024-10-15 09:17:02.646597] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:18.849 [2024-10-15 09:17:02.646974] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:18.849 [2024-10-15 09:17:02.647214] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:18.849 [2024-10-15 09:17:02.647235] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:18.849 [2024-10-15 09:17:02.647507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.849 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:18.850 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.850 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:18.850 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:18.850 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.850 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.850 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.850 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.850 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.850 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.850 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.850 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.850 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.850 "name": "raid_bdev1", 00:17:18.850 "uuid": "3bae0db3-d1e6-43df-bd47-f0a9228e3da7", 00:17:18.850 "strip_size_kb": 64, 00:17:18.850 "state": "online", 00:17:18.850 "raid_level": "concat", 00:17:18.850 "superblock": true, 00:17:18.850 "num_base_bdevs": 4, 00:17:18.850 "num_base_bdevs_discovered": 4, 00:17:18.850 "num_base_bdevs_operational": 4, 00:17:18.850 "base_bdevs_list": [ 00:17:18.850 { 00:17:18.850 "name": "pt1", 00:17:18.850 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:18.850 "is_configured": true, 00:17:18.850 "data_offset": 2048, 00:17:18.850 "data_size": 63488 00:17:18.850 }, 00:17:18.850 { 00:17:18.850 "name": "pt2", 00:17:18.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.850 "is_configured": true, 00:17:18.850 "data_offset": 2048, 00:17:18.850 "data_size": 63488 00:17:18.850 }, 00:17:18.850 { 00:17:18.850 "name": "pt3", 00:17:18.850 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:18.850 "is_configured": true, 00:17:18.850 "data_offset": 2048, 00:17:18.850 
"data_size": 63488 00:17:18.850 }, 00:17:18.850 { 00:17:18.850 "name": "pt4", 00:17:18.850 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:18.850 "is_configured": true, 00:17:18.850 "data_offset": 2048, 00:17:18.850 "data_size": 63488 00:17:18.850 } 00:17:18.850 ] 00:17:18.850 }' 00:17:18.850 09:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.850 09:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.424 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:19.424 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:19.424 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:19.424 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:19.424 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:19.424 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:19.424 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:19.424 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:19.424 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.424 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.424 [2024-10-15 09:17:03.200032] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.424 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.424 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:19.424 "name": "raid_bdev1", 00:17:19.424 "aliases": [ 00:17:19.424 "3bae0db3-d1e6-43df-bd47-f0a9228e3da7" 
00:17:19.424 ], 00:17:19.424 "product_name": "Raid Volume", 00:17:19.424 "block_size": 512, 00:17:19.424 "num_blocks": 253952, 00:17:19.424 "uuid": "3bae0db3-d1e6-43df-bd47-f0a9228e3da7", 00:17:19.424 "assigned_rate_limits": { 00:17:19.424 "rw_ios_per_sec": 0, 00:17:19.424 "rw_mbytes_per_sec": 0, 00:17:19.424 "r_mbytes_per_sec": 0, 00:17:19.424 "w_mbytes_per_sec": 0 00:17:19.424 }, 00:17:19.424 "claimed": false, 00:17:19.424 "zoned": false, 00:17:19.424 "supported_io_types": { 00:17:19.424 "read": true, 00:17:19.424 "write": true, 00:17:19.424 "unmap": true, 00:17:19.424 "flush": true, 00:17:19.424 "reset": true, 00:17:19.424 "nvme_admin": false, 00:17:19.424 "nvme_io": false, 00:17:19.424 "nvme_io_md": false, 00:17:19.424 "write_zeroes": true, 00:17:19.424 "zcopy": false, 00:17:19.424 "get_zone_info": false, 00:17:19.424 "zone_management": false, 00:17:19.424 "zone_append": false, 00:17:19.424 "compare": false, 00:17:19.424 "compare_and_write": false, 00:17:19.424 "abort": false, 00:17:19.424 "seek_hole": false, 00:17:19.424 "seek_data": false, 00:17:19.424 "copy": false, 00:17:19.424 "nvme_iov_md": false 00:17:19.424 }, 00:17:19.424 "memory_domains": [ 00:17:19.424 { 00:17:19.424 "dma_device_id": "system", 00:17:19.424 "dma_device_type": 1 00:17:19.424 }, 00:17:19.424 { 00:17:19.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.424 "dma_device_type": 2 00:17:19.424 }, 00:17:19.424 { 00:17:19.424 "dma_device_id": "system", 00:17:19.424 "dma_device_type": 1 00:17:19.424 }, 00:17:19.424 { 00:17:19.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.424 "dma_device_type": 2 00:17:19.424 }, 00:17:19.424 { 00:17:19.424 "dma_device_id": "system", 00:17:19.424 "dma_device_type": 1 00:17:19.424 }, 00:17:19.424 { 00:17:19.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.424 "dma_device_type": 2 00:17:19.424 }, 00:17:19.424 { 00:17:19.424 "dma_device_id": "system", 00:17:19.424 "dma_device_type": 1 00:17:19.424 }, 00:17:19.424 { 00:17:19.424 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:19.424 "dma_device_type": 2 00:17:19.424 } 00:17:19.424 ], 00:17:19.424 "driver_specific": { 00:17:19.424 "raid": { 00:17:19.424 "uuid": "3bae0db3-d1e6-43df-bd47-f0a9228e3da7", 00:17:19.424 "strip_size_kb": 64, 00:17:19.424 "state": "online", 00:17:19.424 "raid_level": "concat", 00:17:19.424 "superblock": true, 00:17:19.424 "num_base_bdevs": 4, 00:17:19.424 "num_base_bdevs_discovered": 4, 00:17:19.424 "num_base_bdevs_operational": 4, 00:17:19.424 "base_bdevs_list": [ 00:17:19.424 { 00:17:19.424 "name": "pt1", 00:17:19.424 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.424 "is_configured": true, 00:17:19.424 "data_offset": 2048, 00:17:19.424 "data_size": 63488 00:17:19.424 }, 00:17:19.424 { 00:17:19.424 "name": "pt2", 00:17:19.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.424 "is_configured": true, 00:17:19.424 "data_offset": 2048, 00:17:19.424 "data_size": 63488 00:17:19.424 }, 00:17:19.424 { 00:17:19.424 "name": "pt3", 00:17:19.424 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:19.424 "is_configured": true, 00:17:19.424 "data_offset": 2048, 00:17:19.424 "data_size": 63488 00:17:19.424 }, 00:17:19.424 { 00:17:19.424 "name": "pt4", 00:17:19.424 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:19.424 "is_configured": true, 00:17:19.424 "data_offset": 2048, 00:17:19.424 "data_size": 63488 00:17:19.424 } 00:17:19.424 ] 00:17:19.424 } 00:17:19.424 } 00:17:19.424 }' 00:17:19.424 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:19.424 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:19.424 pt2 00:17:19.424 pt3 00:17:19.424 pt4' 00:17:19.424 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.424 09:17:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:19.424 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.682 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:19.682 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.682 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.682 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.682 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.682 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:19.682 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:19.682 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.682 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:19.682 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.682 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.682 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.683 09:17:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.683 [2024-10-15 09:17:03.580131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.683 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3bae0db3-d1e6-43df-bd47-f0a9228e3da7 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3bae0db3-d1e6-43df-bd47-f0a9228e3da7 ']' 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.941 [2024-10-15 09:17:03.631722] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.941 [2024-10-15 09:17:03.631881] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.941 [2024-10-15 09:17:03.632141] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.941 [2024-10-15 09:17:03.632344] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.941 [2024-10-15 09:17:03.632499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:19.941 09:17:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.941 [2024-10-15 09:17:03.799799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:19.941 [2024-10-15 09:17:03.802615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:19.941 [2024-10-15 09:17:03.802688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:19.941 [2024-10-15 09:17:03.802743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:19.941 [2024-10-15 09:17:03.802823] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:19.941 [2024-10-15 09:17:03.802909] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:19.941 [2024-10-15 09:17:03.802946] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:19.941 [2024-10-15 09:17:03.802978] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:19.941 [2024-10-15 09:17:03.803001] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.941 [2024-10-15 09:17:03.803022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:17:19.941 request: 00:17:19.941 { 00:17:19.941 "name": "raid_bdev1", 00:17:19.941 "raid_level": "concat", 00:17:19.941 "base_bdevs": [ 00:17:19.941 "malloc1", 00:17:19.941 "malloc2", 00:17:19.941 "malloc3", 00:17:19.941 "malloc4" 00:17:19.941 ], 00:17:19.941 "strip_size_kb": 64, 00:17:19.941 "superblock": false, 00:17:19.941 "method": "bdev_raid_create", 00:17:19.941 "req_id": 1 00:17:19.941 } 00:17:19.941 Got JSON-RPC error response 00:17:19.941 response: 00:17:19.941 { 00:17:19.941 "code": -17, 00:17:19.941 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:19.941 } 00:17:19.941 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.942 [2024-10-15 09:17:03.859953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:19.942 [2024-10-15 09:17:03.860180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.942 [2024-10-15 09:17:03.860255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:19.942 [2024-10-15 09:17:03.860452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.942 [2024-10-15 09:17:03.863591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.942 [2024-10-15 09:17:03.863753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:19.942 [2024-10-15 09:17:03.863979] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:19.942 [2024-10-15 09:17:03.864177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:19.942 pt1 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.942 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.199 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.199 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.199 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.199 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.199 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.199 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.199 "name": "raid_bdev1", 00:17:20.199 "uuid": "3bae0db3-d1e6-43df-bd47-f0a9228e3da7", 00:17:20.199 "strip_size_kb": 64, 00:17:20.199 "state": "configuring", 00:17:20.199 "raid_level": "concat", 00:17:20.199 "superblock": true, 00:17:20.199 "num_base_bdevs": 4, 00:17:20.199 "num_base_bdevs_discovered": 1, 00:17:20.199 "num_base_bdevs_operational": 4, 00:17:20.199 "base_bdevs_list": [ 00:17:20.199 { 00:17:20.199 "name": "pt1", 00:17:20.199 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:20.199 "is_configured": true, 00:17:20.199 "data_offset": 2048, 00:17:20.199 "data_size": 63488 00:17:20.199 }, 00:17:20.199 { 00:17:20.199 "name": null, 00:17:20.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.199 "is_configured": false, 00:17:20.199 "data_offset": 2048, 00:17:20.199 "data_size": 63488 00:17:20.199 }, 00:17:20.199 { 00:17:20.199 "name": null, 00:17:20.199 
"uuid": "00000000-0000-0000-0000-000000000003", 00:17:20.199 "is_configured": false, 00:17:20.199 "data_offset": 2048, 00:17:20.199 "data_size": 63488 00:17:20.199 }, 00:17:20.199 { 00:17:20.199 "name": null, 00:17:20.199 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:20.199 "is_configured": false, 00:17:20.199 "data_offset": 2048, 00:17:20.199 "data_size": 63488 00:17:20.199 } 00:17:20.199 ] 00:17:20.199 }' 00:17:20.199 09:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.199 09:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.458 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:20.458 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:20.458 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.458 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.458 [2024-10-15 09:17:04.376237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:20.458 [2024-10-15 09:17:04.376485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.458 [2024-10-15 09:17:04.376528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:20.458 [2024-10-15 09:17:04.376549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.458 [2024-10-15 09:17:04.377217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.458 [2024-10-15 09:17:04.377256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:20.458 [2024-10-15 09:17:04.377373] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:20.458 [2024-10-15 09:17:04.377412] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:20.458 pt2 00:17:20.458 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.458 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:20.458 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.458 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.716 [2024-10-15 09:17:04.384255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:20.716 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.716 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:17:20.716 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.716 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:20.716 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:20.716 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.716 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:20.716 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.716 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.716 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.716 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.716 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.716 09:17:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.716 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.716 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.716 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.716 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.716 "name": "raid_bdev1", 00:17:20.716 "uuid": "3bae0db3-d1e6-43df-bd47-f0a9228e3da7", 00:17:20.716 "strip_size_kb": 64, 00:17:20.716 "state": "configuring", 00:17:20.716 "raid_level": "concat", 00:17:20.716 "superblock": true, 00:17:20.716 "num_base_bdevs": 4, 00:17:20.716 "num_base_bdevs_discovered": 1, 00:17:20.716 "num_base_bdevs_operational": 4, 00:17:20.716 "base_bdevs_list": [ 00:17:20.716 { 00:17:20.716 "name": "pt1", 00:17:20.716 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:20.716 "is_configured": true, 00:17:20.716 "data_offset": 2048, 00:17:20.716 "data_size": 63488 00:17:20.716 }, 00:17:20.716 { 00:17:20.716 "name": null, 00:17:20.716 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.716 "is_configured": false, 00:17:20.716 "data_offset": 0, 00:17:20.716 "data_size": 63488 00:17:20.716 }, 00:17:20.716 { 00:17:20.716 "name": null, 00:17:20.716 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:20.716 "is_configured": false, 00:17:20.716 "data_offset": 2048, 00:17:20.716 "data_size": 63488 00:17:20.716 }, 00:17:20.716 { 00:17:20.716 "name": null, 00:17:20.716 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:20.716 "is_configured": false, 00:17:20.716 "data_offset": 2048, 00:17:20.716 "data_size": 63488 00:17:20.716 } 00:17:20.716 ] 00:17:20.716 }' 00:17:20.716 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.716 09:17:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:21.283 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:21.283 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:21.283 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:21.283 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.283 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.283 [2024-10-15 09:17:04.920424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:21.283 [2024-10-15 09:17:04.920656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.283 [2024-10-15 09:17:04.920739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:21.283 [2024-10-15 09:17:04.920760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.283 [2024-10-15 09:17:04.921404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.283 [2024-10-15 09:17:04.921430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:21.283 [2024-10-15 09:17:04.921554] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:21.283 [2024-10-15 09:17:04.921588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:21.283 pt2 00:17:21.283 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.283 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:21.283 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:21.283 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:21.283 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.283 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.283 [2024-10-15 09:17:04.928366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:21.283 [2024-10-15 09:17:04.928562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.283 [2024-10-15 09:17:04.928643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:21.283 [2024-10-15 09:17:04.928766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.283 [2024-10-15 09:17:04.929332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.283 [2024-10-15 09:17:04.929484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:21.283 [2024-10-15 09:17:04.929696] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:21.283 [2024-10-15 09:17:04.929843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:21.283 pt3 00:17:21.283 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.283 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:21.283 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:21.283 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:21.283 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.283 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.283 [2024-10-15 09:17:04.936333] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:21.283 [2024-10-15 09:17:04.936507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.283 [2024-10-15 09:17:04.936547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:21.283 [2024-10-15 09:17:04.936562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.283 [2024-10-15 09:17:04.937068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.283 [2024-10-15 09:17:04.937103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:21.283 [2024-10-15 09:17:04.937210] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:21.283 [2024-10-15 09:17:04.937243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:21.283 [2024-10-15 09:17:04.937421] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:21.283 [2024-10-15 09:17:04.937436] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:21.283 [2024-10-15 09:17:04.937747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:21.283 [2024-10-15 09:17:04.937932] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:21.283 [2024-10-15 09:17:04.937968] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:21.283 [2024-10-15 09:17:04.938157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.284 pt4 00:17:21.284 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.284 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:21.284 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:17:21.284 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:21.284 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.284 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.284 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:21.284 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.284 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:21.284 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.284 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.284 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.284 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.284 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.284 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.284 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.284 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.284 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.284 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.284 "name": "raid_bdev1", 00:17:21.284 "uuid": "3bae0db3-d1e6-43df-bd47-f0a9228e3da7", 00:17:21.284 "strip_size_kb": 64, 00:17:21.284 "state": "online", 00:17:21.284 "raid_level": "concat", 00:17:21.284 
"superblock": true, 00:17:21.284 "num_base_bdevs": 4, 00:17:21.284 "num_base_bdevs_discovered": 4, 00:17:21.284 "num_base_bdevs_operational": 4, 00:17:21.284 "base_bdevs_list": [ 00:17:21.284 { 00:17:21.284 "name": "pt1", 00:17:21.284 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:21.284 "is_configured": true, 00:17:21.284 "data_offset": 2048, 00:17:21.284 "data_size": 63488 00:17:21.284 }, 00:17:21.284 { 00:17:21.284 "name": "pt2", 00:17:21.284 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.284 "is_configured": true, 00:17:21.284 "data_offset": 2048, 00:17:21.284 "data_size": 63488 00:17:21.284 }, 00:17:21.284 { 00:17:21.284 "name": "pt3", 00:17:21.284 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:21.284 "is_configured": true, 00:17:21.284 "data_offset": 2048, 00:17:21.284 "data_size": 63488 00:17:21.284 }, 00:17:21.284 { 00:17:21.284 "name": "pt4", 00:17:21.284 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:21.284 "is_configured": true, 00:17:21.284 "data_offset": 2048, 00:17:21.284 "data_size": 63488 00:17:21.284 } 00:17:21.284 ] 00:17:21.284 }' 00:17:21.284 09:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.284 09:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.542 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:21.542 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:21.542 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:21.542 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:21.542 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:21.542 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:21.542 09:17:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:21.542 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.542 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.542 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:21.801 [2024-10-15 09:17:05.472973] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:21.801 "name": "raid_bdev1", 00:17:21.801 "aliases": [ 00:17:21.801 "3bae0db3-d1e6-43df-bd47-f0a9228e3da7" 00:17:21.801 ], 00:17:21.801 "product_name": "Raid Volume", 00:17:21.801 "block_size": 512, 00:17:21.801 "num_blocks": 253952, 00:17:21.801 "uuid": "3bae0db3-d1e6-43df-bd47-f0a9228e3da7", 00:17:21.801 "assigned_rate_limits": { 00:17:21.801 "rw_ios_per_sec": 0, 00:17:21.801 "rw_mbytes_per_sec": 0, 00:17:21.801 "r_mbytes_per_sec": 0, 00:17:21.801 "w_mbytes_per_sec": 0 00:17:21.801 }, 00:17:21.801 "claimed": false, 00:17:21.801 "zoned": false, 00:17:21.801 "supported_io_types": { 00:17:21.801 "read": true, 00:17:21.801 "write": true, 00:17:21.801 "unmap": true, 00:17:21.801 "flush": true, 00:17:21.801 "reset": true, 00:17:21.801 "nvme_admin": false, 00:17:21.801 "nvme_io": false, 00:17:21.801 "nvme_io_md": false, 00:17:21.801 "write_zeroes": true, 00:17:21.801 "zcopy": false, 00:17:21.801 "get_zone_info": false, 00:17:21.801 "zone_management": false, 00:17:21.801 "zone_append": false, 00:17:21.801 "compare": false, 00:17:21.801 "compare_and_write": false, 00:17:21.801 "abort": false, 00:17:21.801 "seek_hole": false, 00:17:21.801 "seek_data": false, 00:17:21.801 "copy": false, 00:17:21.801 "nvme_iov_md": false 00:17:21.801 }, 00:17:21.801 
"memory_domains": [ 00:17:21.801 { 00:17:21.801 "dma_device_id": "system", 00:17:21.801 "dma_device_type": 1 00:17:21.801 }, 00:17:21.801 { 00:17:21.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.801 "dma_device_type": 2 00:17:21.801 }, 00:17:21.801 { 00:17:21.801 "dma_device_id": "system", 00:17:21.801 "dma_device_type": 1 00:17:21.801 }, 00:17:21.801 { 00:17:21.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.801 "dma_device_type": 2 00:17:21.801 }, 00:17:21.801 { 00:17:21.801 "dma_device_id": "system", 00:17:21.801 "dma_device_type": 1 00:17:21.801 }, 00:17:21.801 { 00:17:21.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.801 "dma_device_type": 2 00:17:21.801 }, 00:17:21.801 { 00:17:21.801 "dma_device_id": "system", 00:17:21.801 "dma_device_type": 1 00:17:21.801 }, 00:17:21.801 { 00:17:21.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.801 "dma_device_type": 2 00:17:21.801 } 00:17:21.801 ], 00:17:21.801 "driver_specific": { 00:17:21.801 "raid": { 00:17:21.801 "uuid": "3bae0db3-d1e6-43df-bd47-f0a9228e3da7", 00:17:21.801 "strip_size_kb": 64, 00:17:21.801 "state": "online", 00:17:21.801 "raid_level": "concat", 00:17:21.801 "superblock": true, 00:17:21.801 "num_base_bdevs": 4, 00:17:21.801 "num_base_bdevs_discovered": 4, 00:17:21.801 "num_base_bdevs_operational": 4, 00:17:21.801 "base_bdevs_list": [ 00:17:21.801 { 00:17:21.801 "name": "pt1", 00:17:21.801 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:21.801 "is_configured": true, 00:17:21.801 "data_offset": 2048, 00:17:21.801 "data_size": 63488 00:17:21.801 }, 00:17:21.801 { 00:17:21.801 "name": "pt2", 00:17:21.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.801 "is_configured": true, 00:17:21.801 "data_offset": 2048, 00:17:21.801 "data_size": 63488 00:17:21.801 }, 00:17:21.801 { 00:17:21.801 "name": "pt3", 00:17:21.801 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:21.801 "is_configured": true, 00:17:21.801 "data_offset": 2048, 00:17:21.801 "data_size": 63488 
00:17:21.801 }, 00:17:21.801 { 00:17:21.801 "name": "pt4", 00:17:21.801 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:21.801 "is_configured": true, 00:17:21.801 "data_offset": 2048, 00:17:21.801 "data_size": 63488 00:17:21.801 } 00:17:21.801 ] 00:17:21.801 } 00:17:21.801 } 00:17:21.801 }' 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:21.801 pt2 00:17:21.801 pt3 00:17:21.801 pt4' 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.801 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.802 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:22.060 [2024-10-15 09:17:05.808994] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3bae0db3-d1e6-43df-bd47-f0a9228e3da7 '!=' 3bae0db3-d1e6-43df-bd47-f0a9228e3da7 ']' 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72996 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72996 ']' 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72996 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:17:22.060 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:22.061 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72996 00:17:22.061 killing process with pid 72996 00:17:22.061 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:22.061 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:22.061 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72996' 00:17:22.061 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72996 00:17:22.061 [2024-10-15 09:17:05.893045] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:22.061 [2024-10-15 09:17:05.893185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.061 09:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72996 00:17:22.061 [2024-10-15 09:17:05.893298] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.061 [2024-10-15 09:17:05.893315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:22.628 [2024-10-15 09:17:06.278064] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:23.564 09:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:23.564 00:17:23.564 real 0m6.139s 00:17:23.564 user 0m9.112s 00:17:23.564 sys 0m0.983s 00:17:23.564 09:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:23.564 09:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.564 ************************************ 00:17:23.564 END TEST raid_superblock_test 
00:17:23.564 ************************************ 00:17:23.564 09:17:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:17:23.564 09:17:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:23.564 09:17:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:23.564 09:17:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.564 ************************************ 00:17:23.564 START TEST raid_read_error_test 00:17:23.564 ************************************ 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Z206OPtJnj 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73261 00:17:23.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73261 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73261 ']' 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.564 09:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:23.565 09:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.565 09:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:23.565 09:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.823 [2024-10-15 09:17:07.550715] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:17:23.823 [2024-10-15 09:17:07.551060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73261 ] 00:17:23.823 [2024-10-15 09:17:07.722254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.082 [2024-10-15 09:17:07.869734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.340 [2024-10-15 09:17:08.091515] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.340 [2024-10-15 09:17:08.091617] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.908 BaseBdev1_malloc 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.908 true 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.908 [2024-10-15 09:17:08.585915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:24.908 [2024-10-15 09:17:08.586166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.908 [2024-10-15 09:17:08.586211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:24.908 [2024-10-15 09:17:08.586231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.908 [2024-10-15 09:17:08.589310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.908 [2024-10-15 09:17:08.589360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:24.908 BaseBdev1 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.908 BaseBdev2_malloc 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.908 true 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.908 [2024-10-15 09:17:08.649593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:24.908 [2024-10-15 09:17:08.649669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.908 [2024-10-15 09:17:08.649697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:24.908 [2024-10-15 09:17:08.649715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.908 [2024-10-15 09:17:08.652666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.908 [2024-10-15 09:17:08.652717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:24.908 BaseBdev2 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.908 BaseBdev3_malloc 00:17:24.908 09:17:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.908 true 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.908 [2024-10-15 09:17:08.723850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:24.908 [2024-10-15 09:17:08.724053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.908 [2024-10-15 09:17:08.724142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:24.908 [2024-10-15 09:17:08.724263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.908 [2024-10-15 09:17:08.727364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.908 [2024-10-15 09:17:08.727533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:24.908 BaseBdev3 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.908 BaseBdev4_malloc 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.908 true 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.908 [2024-10-15 09:17:08.787471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:24.908 [2024-10-15 09:17:08.787672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.908 [2024-10-15 09:17:08.787744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:24.908 [2024-10-15 09:17:08.787879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.908 [2024-10-15 09:17:08.790840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.908 BaseBdev4 00:17:24.908 [2024-10-15 09:17:08.791004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.908 [2024-10-15 09:17:08.795716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.908 [2024-10-15 09:17:08.798308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:24.908 [2024-10-15 09:17:08.798426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:24.908 [2024-10-15 09:17:08.798527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:24.908 [2024-10-15 09:17:08.798825] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:24.908 [2024-10-15 09:17:08.798850] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:24.908 [2024-10-15 09:17:08.799189] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:24.908 [2024-10-15 09:17:08.799412] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:24.908 [2024-10-15 09:17:08.799433] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:24.908 [2024-10-15 09:17:08.799679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:24.908 09:17:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:24.908 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.909 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.909 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.909 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.909 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.909 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.909 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.909 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.909 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.167 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.167 "name": "raid_bdev1", 00:17:25.167 "uuid": "7ccc4450-42e1-4476-ae01-280b5f2f7ffa", 00:17:25.167 "strip_size_kb": 64, 00:17:25.167 "state": "online", 00:17:25.167 "raid_level": "concat", 00:17:25.167 "superblock": true, 00:17:25.167 "num_base_bdevs": 4, 00:17:25.167 "num_base_bdevs_discovered": 4, 00:17:25.167 "num_base_bdevs_operational": 4, 00:17:25.167 "base_bdevs_list": [ 
00:17:25.167 { 00:17:25.167 "name": "BaseBdev1", 00:17:25.167 "uuid": "66cfd023-65fc-5bc9-a9cb-ae661c853db2", 00:17:25.167 "is_configured": true, 00:17:25.167 "data_offset": 2048, 00:17:25.167 "data_size": 63488 00:17:25.167 }, 00:17:25.167 { 00:17:25.167 "name": "BaseBdev2", 00:17:25.167 "uuid": "2144ecd9-2ddc-5551-9ba0-7d427b97c766", 00:17:25.167 "is_configured": true, 00:17:25.167 "data_offset": 2048, 00:17:25.167 "data_size": 63488 00:17:25.167 }, 00:17:25.167 { 00:17:25.167 "name": "BaseBdev3", 00:17:25.167 "uuid": "4ce915e0-0b4d-5e72-8bf6-c843d792f3a8", 00:17:25.167 "is_configured": true, 00:17:25.167 "data_offset": 2048, 00:17:25.167 "data_size": 63488 00:17:25.167 }, 00:17:25.167 { 00:17:25.167 "name": "BaseBdev4", 00:17:25.167 "uuid": "f7768729-30f1-5b5d-afaa-8c3d06680379", 00:17:25.167 "is_configured": true, 00:17:25.167 "data_offset": 2048, 00:17:25.167 "data_size": 63488 00:17:25.167 } 00:17:25.167 ] 00:17:25.167 }' 00:17:25.167 09:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.167 09:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.426 09:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:25.426 09:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:25.685 [2024-10-15 09:17:09.453440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.620 09:17:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.620 09:17:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.620 "name": "raid_bdev1", 00:17:26.620 "uuid": "7ccc4450-42e1-4476-ae01-280b5f2f7ffa", 00:17:26.620 "strip_size_kb": 64, 00:17:26.620 "state": "online", 00:17:26.620 "raid_level": "concat", 00:17:26.620 "superblock": true, 00:17:26.620 "num_base_bdevs": 4, 00:17:26.620 "num_base_bdevs_discovered": 4, 00:17:26.620 "num_base_bdevs_operational": 4, 00:17:26.620 "base_bdevs_list": [ 00:17:26.620 { 00:17:26.620 "name": "BaseBdev1", 00:17:26.620 "uuid": "66cfd023-65fc-5bc9-a9cb-ae661c853db2", 00:17:26.620 "is_configured": true, 00:17:26.620 "data_offset": 2048, 00:17:26.620 "data_size": 63488 00:17:26.620 }, 00:17:26.620 { 00:17:26.620 "name": "BaseBdev2", 00:17:26.620 "uuid": "2144ecd9-2ddc-5551-9ba0-7d427b97c766", 00:17:26.620 "is_configured": true, 00:17:26.620 "data_offset": 2048, 00:17:26.620 "data_size": 63488 00:17:26.620 }, 00:17:26.620 { 00:17:26.620 "name": "BaseBdev3", 00:17:26.620 "uuid": "4ce915e0-0b4d-5e72-8bf6-c843d792f3a8", 00:17:26.620 "is_configured": true, 00:17:26.620 "data_offset": 2048, 00:17:26.620 "data_size": 63488 00:17:26.620 }, 00:17:26.620 { 00:17:26.620 "name": "BaseBdev4", 00:17:26.620 "uuid": "f7768729-30f1-5b5d-afaa-8c3d06680379", 00:17:26.620 "is_configured": true, 00:17:26.620 "data_offset": 2048, 00:17:26.620 "data_size": 63488 00:17:26.620 } 00:17:26.620 ] 00:17:26.620 }' 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.620 09:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.187 09:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:27.187 09:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.187 09:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.187 [2024-10-15 09:17:10.837819] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:27.187 [2024-10-15 09:17:10.837867] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:27.187 [2024-10-15 09:17:10.841254] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:27.187 [2024-10-15 09:17:10.841339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.187 [2024-10-15 09:17:10.841406] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:27.187 [2024-10-15 09:17:10.841430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:27.187 { 00:17:27.187 "results": [ 00:17:27.187 { 00:17:27.187 "job": "raid_bdev1", 00:17:27.187 "core_mask": "0x1", 00:17:27.187 "workload": "randrw", 00:17:27.187 "percentage": 50, 00:17:27.187 "status": "finished", 00:17:27.187 "queue_depth": 1, 00:17:27.187 "io_size": 131072, 00:17:27.187 "runtime": 1.381426, 00:17:27.187 "iops": 9992.5728920695, 00:17:27.187 "mibps": 1249.0716115086875, 00:17:27.187 "io_failed": 1, 00:17:27.187 "io_timeout": 0, 00:17:27.187 "avg_latency_us": 140.94383405222086, 00:17:27.187 "min_latency_us": 43.75272727272727, 00:17:27.187 "max_latency_us": 1876.7127272727273 00:17:27.187 } 00:17:27.187 ], 00:17:27.187 "core_count": 1 00:17:27.187 } 00:17:27.187 09:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.188 09:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73261 00:17:27.188 09:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73261 ']' 00:17:27.188 09:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73261 00:17:27.188 09:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:17:27.188 09:17:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:27.188 09:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73261 00:17:27.188 killing process with pid 73261 00:17:27.188 09:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:27.188 09:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:27.188 09:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73261' 00:17:27.188 09:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73261 00:17:27.188 [2024-10-15 09:17:10.879877] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:27.188 09:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73261 00:17:27.445 [2024-10-15 09:17:11.194314] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:28.846 09:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Z206OPtJnj 00:17:28.846 09:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:28.846 09:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:28.846 09:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:17:28.846 09:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:17:28.846 ************************************ 00:17:28.846 END TEST raid_read_error_test 00:17:28.846 ************************************ 00:17:28.846 09:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:28.846 09:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:28.846 09:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:17:28.846 00:17:28.846 real 0m4.935s 
00:17:28.846 user 0m5.983s 00:17:28.846 sys 0m0.661s 00:17:28.846 09:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:28.846 09:17:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.846 09:17:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:17:28.846 09:17:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:28.846 09:17:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:28.846 09:17:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:28.846 ************************************ 00:17:28.846 START TEST raid_write_error_test 00:17:28.846 ************************************ 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5D703PAWvp 00:17:28.846 09:17:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73412 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73412 00:17:28.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73412 ']' 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:28.846 09:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.846 [2024-10-15 09:17:12.551944] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:17:28.846 [2024-10-15 09:17:12.552183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73412 ] 00:17:28.846 [2024-10-15 09:17:12.731416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.105 [2024-10-15 09:17:12.901299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.362 [2024-10-15 09:17:13.173765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.362 [2024-10-15 09:17:13.173820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.620 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:29.620 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:17:29.620 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:29.620 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:29.620 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.620 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.879 BaseBdev1_malloc 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.879 true 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.879 [2024-10-15 09:17:13.590676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:29.879 [2024-10-15 09:17:13.590753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.879 [2024-10-15 09:17:13.590786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:29.879 [2024-10-15 09:17:13.590806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.879 [2024-10-15 09:17:13.593810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.879 [2024-10-15 09:17:13.593864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:29.879 BaseBdev1 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.879 BaseBdev2_malloc 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:29.879 09:17:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.879 true 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.879 [2024-10-15 09:17:13.662022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:29.879 [2024-10-15 09:17:13.662112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.879 [2024-10-15 09:17:13.662158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:29.879 [2024-10-15 09:17:13.662179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.879 [2024-10-15 09:17:13.665144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.879 [2024-10-15 09:17:13.665192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:29.879 BaseBdev2 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:17:29.879 BaseBdev3_malloc 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.879 true 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.879 [2024-10-15 09:17:13.736500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:29.879 [2024-10-15 09:17:13.736711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.879 [2024-10-15 09:17:13.736760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:29.879 [2024-10-15 09:17:13.736781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.879 [2024-10-15 09:17:13.739771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.879 [2024-10-15 09:17:13.739942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:29.879 BaseBdev3 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.879 BaseBdev4_malloc 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.879 true 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.879 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.137 [2024-10-15 09:17:13.808305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:30.137 [2024-10-15 09:17:13.808397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.137 [2024-10-15 09:17:13.808435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:30.137 [2024-10-15 09:17:13.808458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.137 [2024-10-15 09:17:13.811632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.137 [2024-10-15 09:17:13.811689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:30.137 BaseBdev4 
00:17:30.137 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.138 [2024-10-15 09:17:13.820486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.138 [2024-10-15 09:17:13.823165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:30.138 [2024-10-15 09:17:13.823285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:30.138 [2024-10-15 09:17:13.823391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:30.138 [2024-10-15 09:17:13.823710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:30.138 [2024-10-15 09:17:13.823742] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:30.138 [2024-10-15 09:17:13.824110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:30.138 [2024-10-15 09:17:13.824378] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:30.138 [2024-10-15 09:17:13.824396] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:30.138 [2024-10-15 09:17:13.824688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.138 "name": "raid_bdev1", 00:17:30.138 "uuid": "e5ac796f-ecaa-413c-a185-d5fa77beb884", 00:17:30.138 "strip_size_kb": 64, 00:17:30.138 "state": "online", 00:17:30.138 "raid_level": "concat", 00:17:30.138 "superblock": true, 00:17:30.138 "num_base_bdevs": 4, 00:17:30.138 "num_base_bdevs_discovered": 4, 00:17:30.138 
"num_base_bdevs_operational": 4, 00:17:30.138 "base_bdevs_list": [ 00:17:30.138 { 00:17:30.138 "name": "BaseBdev1", 00:17:30.138 "uuid": "f8aadf7e-cef2-5841-adf0-6e6f8fe3b556", 00:17:30.138 "is_configured": true, 00:17:30.138 "data_offset": 2048, 00:17:30.138 "data_size": 63488 00:17:30.138 }, 00:17:30.138 { 00:17:30.138 "name": "BaseBdev2", 00:17:30.138 "uuid": "c11aae0e-2761-514a-8d1d-f78e347df4f6", 00:17:30.138 "is_configured": true, 00:17:30.138 "data_offset": 2048, 00:17:30.138 "data_size": 63488 00:17:30.138 }, 00:17:30.138 { 00:17:30.138 "name": "BaseBdev3", 00:17:30.138 "uuid": "94bfd2d2-89db-5271-87b6-b239584ae75d", 00:17:30.138 "is_configured": true, 00:17:30.138 "data_offset": 2048, 00:17:30.138 "data_size": 63488 00:17:30.138 }, 00:17:30.138 { 00:17:30.138 "name": "BaseBdev4", 00:17:30.138 "uuid": "b7fdfab1-06bc-5db4-b329-e11c7cbf760b", 00:17:30.138 "is_configured": true, 00:17:30.138 "data_offset": 2048, 00:17:30.138 "data_size": 63488 00:17:30.138 } 00:17:30.138 ] 00:17:30.138 }' 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.138 09:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.704 09:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:30.704 09:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:30.704 [2024-10-15 09:17:14.526341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.636 09:17:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.636 "name": "raid_bdev1", 00:17:31.636 "uuid": "e5ac796f-ecaa-413c-a185-d5fa77beb884", 00:17:31.636 "strip_size_kb": 64, 00:17:31.636 "state": "online", 00:17:31.636 "raid_level": "concat", 00:17:31.636 "superblock": true, 00:17:31.636 "num_base_bdevs": 4, 00:17:31.636 "num_base_bdevs_discovered": 4, 00:17:31.636 "num_base_bdevs_operational": 4, 00:17:31.636 "base_bdevs_list": [ 00:17:31.636 { 00:17:31.636 "name": "BaseBdev1", 00:17:31.636 "uuid": "f8aadf7e-cef2-5841-adf0-6e6f8fe3b556", 00:17:31.636 "is_configured": true, 00:17:31.636 "data_offset": 2048, 00:17:31.636 "data_size": 63488 00:17:31.636 }, 00:17:31.636 { 00:17:31.636 "name": "BaseBdev2", 00:17:31.636 "uuid": "c11aae0e-2761-514a-8d1d-f78e347df4f6", 00:17:31.636 "is_configured": true, 00:17:31.636 "data_offset": 2048, 00:17:31.636 "data_size": 63488 00:17:31.636 }, 00:17:31.636 { 00:17:31.636 "name": "BaseBdev3", 00:17:31.636 "uuid": "94bfd2d2-89db-5271-87b6-b239584ae75d", 00:17:31.636 "is_configured": true, 00:17:31.636 "data_offset": 2048, 00:17:31.636 "data_size": 63488 00:17:31.636 }, 00:17:31.636 { 00:17:31.636 "name": "BaseBdev4", 00:17:31.636 "uuid": "b7fdfab1-06bc-5db4-b329-e11c7cbf760b", 00:17:31.636 "is_configured": true, 00:17:31.636 "data_offset": 2048, 00:17:31.636 "data_size": 63488 00:17:31.636 } 00:17:31.636 ] 00:17:31.636 }' 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.636 09:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.203 09:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:32.203 09:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.203 09:17:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:32.204 [2024-10-15 09:17:15.899895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:32.204 [2024-10-15 09:17:15.900076] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.204 [2024-10-15 09:17:15.903780] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.204 [2024-10-15 09:17:15.903932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.204 [2024-10-15 09:17:15.904003] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.204 [2024-10-15 09:17:15.904026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:32.204 { 00:17:32.204 "results": [ 00:17:32.204 { 00:17:32.204 "job": "raid_bdev1", 00:17:32.204 "core_mask": "0x1", 00:17:32.204 "workload": "randrw", 00:17:32.204 "percentage": 50, 00:17:32.204 "status": "finished", 00:17:32.204 "queue_depth": 1, 00:17:32.204 "io_size": 131072, 00:17:32.204 "runtime": 1.370991, 00:17:32.204 "iops": 9962.866277021512, 00:17:32.204 "mibps": 1245.358284627689, 00:17:32.204 "io_failed": 1, 00:17:32.204 "io_timeout": 0, 00:17:32.204 "avg_latency_us": 141.44915746040198, 00:17:32.204 "min_latency_us": 43.054545454545455, 00:17:32.204 "max_latency_us": 1869.2654545454545 00:17:32.204 } 00:17:32.204 ], 00:17:32.204 "core_count": 1 00:17:32.204 } 00:17:32.204 09:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.204 09:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73412 00:17:32.204 09:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73412 ']' 00:17:32.204 09:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73412 00:17:32.204 09:17:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:17:32.204 09:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:32.204 09:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73412 00:17:32.204 killing process with pid 73412 00:17:32.204 09:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:32.204 09:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:32.204 09:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73412' 00:17:32.204 09:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73412 00:17:32.204 09:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73412 00:17:32.204 [2024-10-15 09:17:15.942974] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:32.466 [2024-10-15 09:17:16.259503] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:33.844 09:17:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5D703PAWvp 00:17:33.844 09:17:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:33.844 09:17:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:33.844 09:17:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:17:33.844 09:17:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:17:33.844 09:17:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:33.844 09:17:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:33.844 09:17:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:17:33.844 00:17:33.844 real 0m5.020s 00:17:33.844 user 0m6.152s 
00:17:33.844 sys 0m0.652s 00:17:33.844 ************************************ 00:17:33.844 END TEST raid_write_error_test 00:17:33.844 ************************************ 00:17:33.844 09:17:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:33.844 09:17:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.844 09:17:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:17:33.844 09:17:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:17:33.844 09:17:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:33.844 09:17:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:33.844 09:17:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:33.844 ************************************ 00:17:33.844 START TEST raid_state_function_test 00:17:33.844 ************************************ 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:33.844 
09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:33.844 09:17:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:33.844 Process raid pid: 73556 00:17:33.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73556 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73556' 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73556 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73556 ']' 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:33.844 09:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.844 [2024-10-15 09:17:17.602054] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:17:33.844 [2024-10-15 09:17:17.602696] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.102 [2024-10-15 09:17:17.778089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.102 [2024-10-15 09:17:17.976495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.359 [2024-10-15 09:17:18.202679] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.359 [2024-10-15 09:17:18.202749] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.925 09:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:34.925 09:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:17:34.925 09:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:34.926 09:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.926 09:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.926 [2024-10-15 09:17:18.658357] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:34.926 [2024-10-15 09:17:18.658429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:34.926 [2024-10-15 09:17:18.658448] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:34.926 [2024-10-15 09:17:18.658465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:34.926 [2024-10-15 09:17:18.658475] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:17:34.926 [2024-10-15 09:17:18.658489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:34.926 [2024-10-15 09:17:18.658499] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:34.926 [2024-10-15 09:17:18.658514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:34.926 09:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.926 09:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:34.926 09:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:34.926 09:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:34.926 09:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.926 09:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.926 09:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:34.926 09:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.926 09:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.926 09:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.926 09:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.926 09:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.926 09:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.926 09:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:34.926 09:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.926 09:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.926 09:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.926 "name": "Existed_Raid", 00:17:34.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.926 "strip_size_kb": 0, 00:17:34.926 "state": "configuring", 00:17:34.926 "raid_level": "raid1", 00:17:34.926 "superblock": false, 00:17:34.926 "num_base_bdevs": 4, 00:17:34.926 "num_base_bdevs_discovered": 0, 00:17:34.926 "num_base_bdevs_operational": 4, 00:17:34.926 "base_bdevs_list": [ 00:17:34.926 { 00:17:34.926 "name": "BaseBdev1", 00:17:34.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.926 "is_configured": false, 00:17:34.926 "data_offset": 0, 00:17:34.926 "data_size": 0 00:17:34.926 }, 00:17:34.926 { 00:17:34.926 "name": "BaseBdev2", 00:17:34.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.926 "is_configured": false, 00:17:34.926 "data_offset": 0, 00:17:34.926 "data_size": 0 00:17:34.926 }, 00:17:34.926 { 00:17:34.926 "name": "BaseBdev3", 00:17:34.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.926 "is_configured": false, 00:17:34.926 "data_offset": 0, 00:17:34.926 "data_size": 0 00:17:34.926 }, 00:17:34.926 { 00:17:34.926 "name": "BaseBdev4", 00:17:34.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.926 "is_configured": false, 00:17:34.926 "data_offset": 0, 00:17:34.926 "data_size": 0 00:17:34.926 } 00:17:34.926 ] 00:17:34.926 }' 00:17:34.926 09:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.926 09:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.493 [2024-10-15 09:17:19.202536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:35.493 [2024-10-15 09:17:19.202591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.493 [2024-10-15 09:17:19.210566] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:35.493 [2024-10-15 09:17:19.210628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:35.493 [2024-10-15 09:17:19.210645] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:35.493 [2024-10-15 09:17:19.210661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:35.493 [2024-10-15 09:17:19.210671] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:35.493 [2024-10-15 09:17:19.210685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:35.493 [2024-10-15 09:17:19.210695] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:35.493 [2024-10-15 09:17:19.210710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.493 [2024-10-15 09:17:19.258887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.493 BaseBdev1 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.493 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.494 [ 00:17:35.494 { 00:17:35.494 "name": "BaseBdev1", 00:17:35.494 "aliases": [ 00:17:35.494 "b95e8c06-558d-432e-bf18-17f4298d623e" 00:17:35.494 ], 00:17:35.494 "product_name": "Malloc disk", 00:17:35.494 "block_size": 512, 00:17:35.494 "num_blocks": 65536, 00:17:35.494 "uuid": "b95e8c06-558d-432e-bf18-17f4298d623e", 00:17:35.494 "assigned_rate_limits": { 00:17:35.494 "rw_ios_per_sec": 0, 00:17:35.494 "rw_mbytes_per_sec": 0, 00:17:35.494 "r_mbytes_per_sec": 0, 00:17:35.494 "w_mbytes_per_sec": 0 00:17:35.494 }, 00:17:35.494 "claimed": true, 00:17:35.494 "claim_type": "exclusive_write", 00:17:35.494 "zoned": false, 00:17:35.494 "supported_io_types": { 00:17:35.494 "read": true, 00:17:35.494 "write": true, 00:17:35.494 "unmap": true, 00:17:35.494 "flush": true, 00:17:35.494 "reset": true, 00:17:35.494 "nvme_admin": false, 00:17:35.494 "nvme_io": false, 00:17:35.494 "nvme_io_md": false, 00:17:35.494 "write_zeroes": true, 00:17:35.494 "zcopy": true, 00:17:35.494 "get_zone_info": false, 00:17:35.494 "zone_management": false, 00:17:35.494 "zone_append": false, 00:17:35.494 "compare": false, 00:17:35.494 "compare_and_write": false, 00:17:35.494 "abort": true, 00:17:35.494 "seek_hole": false, 00:17:35.494 "seek_data": false, 00:17:35.494 "copy": true, 00:17:35.494 "nvme_iov_md": false 00:17:35.494 }, 00:17:35.494 "memory_domains": [ 00:17:35.494 { 00:17:35.494 "dma_device_id": "system", 00:17:35.494 "dma_device_type": 1 00:17:35.494 }, 00:17:35.494 { 00:17:35.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.494 "dma_device_type": 2 00:17:35.494 } 00:17:35.494 ], 00:17:35.494 "driver_specific": {} 00:17:35.494 } 00:17:35.494 ] 00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.494 "name": "Existed_Raid", 00:17:35.494 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:35.494 "strip_size_kb": 0, 00:17:35.494 "state": "configuring", 00:17:35.494 "raid_level": "raid1", 00:17:35.494 "superblock": false, 00:17:35.494 "num_base_bdevs": 4, 00:17:35.494 "num_base_bdevs_discovered": 1, 00:17:35.494 "num_base_bdevs_operational": 4, 00:17:35.494 "base_bdevs_list": [ 00:17:35.494 { 00:17:35.494 "name": "BaseBdev1", 00:17:35.494 "uuid": "b95e8c06-558d-432e-bf18-17f4298d623e", 00:17:35.494 "is_configured": true, 00:17:35.494 "data_offset": 0, 00:17:35.494 "data_size": 65536 00:17:35.494 }, 00:17:35.494 { 00:17:35.494 "name": "BaseBdev2", 00:17:35.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.494 "is_configured": false, 00:17:35.494 "data_offset": 0, 00:17:35.494 "data_size": 0 00:17:35.494 }, 00:17:35.494 { 00:17:35.494 "name": "BaseBdev3", 00:17:35.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.494 "is_configured": false, 00:17:35.494 "data_offset": 0, 00:17:35.494 "data_size": 0 00:17:35.494 }, 00:17:35.494 { 00:17:35.494 "name": "BaseBdev4", 00:17:35.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.494 "is_configured": false, 00:17:35.494 "data_offset": 0, 00:17:35.494 "data_size": 0 00:17:35.494 } 00:17:35.494 ] 00:17:35.494 }' 00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.494 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.150 [2024-10-15 09:17:19.835130] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:36.150 [2024-10-15 09:17:19.835204] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.150 [2024-10-15 09:17:19.847251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:36.150 [2024-10-15 09:17:19.850037] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:36.150 [2024-10-15 09:17:19.850231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:36.150 [2024-10-15 09:17:19.850390] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:36.150 [2024-10-15 09:17:19.850551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:36.150 [2024-10-15 09:17:19.850663] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:36.150 [2024-10-15 09:17:19.850722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:36.150 09:17:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.150 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.150 "name": "Existed_Raid", 00:17:36.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.150 "strip_size_kb": 0, 00:17:36.150 "state": "configuring", 00:17:36.150 "raid_level": "raid1", 00:17:36.150 "superblock": false, 00:17:36.150 "num_base_bdevs": 4, 00:17:36.150 "num_base_bdevs_discovered": 1, 00:17:36.150 
"num_base_bdevs_operational": 4, 00:17:36.150 "base_bdevs_list": [ 00:17:36.150 { 00:17:36.150 "name": "BaseBdev1", 00:17:36.150 "uuid": "b95e8c06-558d-432e-bf18-17f4298d623e", 00:17:36.150 "is_configured": true, 00:17:36.150 "data_offset": 0, 00:17:36.150 "data_size": 65536 00:17:36.150 }, 00:17:36.150 { 00:17:36.150 "name": "BaseBdev2", 00:17:36.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.150 "is_configured": false, 00:17:36.150 "data_offset": 0, 00:17:36.150 "data_size": 0 00:17:36.150 }, 00:17:36.150 { 00:17:36.150 "name": "BaseBdev3", 00:17:36.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.150 "is_configured": false, 00:17:36.150 "data_offset": 0, 00:17:36.150 "data_size": 0 00:17:36.150 }, 00:17:36.150 { 00:17:36.150 "name": "BaseBdev4", 00:17:36.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.150 "is_configured": false, 00:17:36.150 "data_offset": 0, 00:17:36.150 "data_size": 0 00:17:36.150 } 00:17:36.150 ] 00:17:36.150 }' 00:17:36.151 09:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.151 09:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.718 [2024-10-15 09:17:20.417272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:36.718 BaseBdev2 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev2 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.718 [ 00:17:36.718 { 00:17:36.718 "name": "BaseBdev2", 00:17:36.718 "aliases": [ 00:17:36.718 "943522f9-5a3c-412d-98d9-269e3a4caed0" 00:17:36.718 ], 00:17:36.718 "product_name": "Malloc disk", 00:17:36.718 "block_size": 512, 00:17:36.718 "num_blocks": 65536, 00:17:36.718 "uuid": "943522f9-5a3c-412d-98d9-269e3a4caed0", 00:17:36.718 "assigned_rate_limits": { 00:17:36.718 "rw_ios_per_sec": 0, 00:17:36.718 "rw_mbytes_per_sec": 0, 00:17:36.718 "r_mbytes_per_sec": 0, 00:17:36.718 "w_mbytes_per_sec": 0 00:17:36.718 }, 00:17:36.718 "claimed": true, 00:17:36.718 "claim_type": "exclusive_write", 00:17:36.718 "zoned": false, 00:17:36.718 "supported_io_types": { 00:17:36.718 "read": true, 00:17:36.718 "write": true, 00:17:36.718 
"unmap": true, 00:17:36.718 "flush": true, 00:17:36.718 "reset": true, 00:17:36.718 "nvme_admin": false, 00:17:36.718 "nvme_io": false, 00:17:36.718 "nvme_io_md": false, 00:17:36.718 "write_zeroes": true, 00:17:36.718 "zcopy": true, 00:17:36.718 "get_zone_info": false, 00:17:36.718 "zone_management": false, 00:17:36.718 "zone_append": false, 00:17:36.718 "compare": false, 00:17:36.718 "compare_and_write": false, 00:17:36.718 "abort": true, 00:17:36.718 "seek_hole": false, 00:17:36.718 "seek_data": false, 00:17:36.718 "copy": true, 00:17:36.718 "nvme_iov_md": false 00:17:36.718 }, 00:17:36.718 "memory_domains": [ 00:17:36.718 { 00:17:36.718 "dma_device_id": "system", 00:17:36.718 "dma_device_type": 1 00:17:36.718 }, 00:17:36.718 { 00:17:36.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.718 "dma_device_type": 2 00:17:36.718 } 00:17:36.718 ], 00:17:36.718 "driver_specific": {} 00:17:36.718 } 00:17:36.718 ] 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.718 09:17:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.718 09:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.718 "name": "Existed_Raid", 00:17:36.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.719 "strip_size_kb": 0, 00:17:36.719 "state": "configuring", 00:17:36.719 "raid_level": "raid1", 00:17:36.719 "superblock": false, 00:17:36.719 "num_base_bdevs": 4, 00:17:36.719 "num_base_bdevs_discovered": 2, 00:17:36.719 "num_base_bdevs_operational": 4, 00:17:36.719 "base_bdevs_list": [ 00:17:36.719 { 00:17:36.719 "name": "BaseBdev1", 00:17:36.719 "uuid": "b95e8c06-558d-432e-bf18-17f4298d623e", 00:17:36.719 "is_configured": true, 00:17:36.719 "data_offset": 0, 00:17:36.719 "data_size": 65536 00:17:36.719 }, 00:17:36.719 { 00:17:36.719 "name": "BaseBdev2", 00:17:36.719 "uuid": "943522f9-5a3c-412d-98d9-269e3a4caed0", 00:17:36.719 "is_configured": true, 00:17:36.719 
"data_offset": 0, 00:17:36.719 "data_size": 65536 00:17:36.719 }, 00:17:36.719 { 00:17:36.719 "name": "BaseBdev3", 00:17:36.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.719 "is_configured": false, 00:17:36.719 "data_offset": 0, 00:17:36.719 "data_size": 0 00:17:36.719 }, 00:17:36.719 { 00:17:36.719 "name": "BaseBdev4", 00:17:36.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.719 "is_configured": false, 00:17:36.719 "data_offset": 0, 00:17:36.719 "data_size": 0 00:17:36.719 } 00:17:36.719 ] 00:17:36.719 }' 00:17:36.719 09:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.719 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.287 09:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:37.287 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.287 09:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.287 [2024-10-15 09:17:21.032917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:37.287 BaseBdev3 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.287 [ 00:17:37.287 { 00:17:37.287 "name": "BaseBdev3", 00:17:37.287 "aliases": [ 00:17:37.287 "461c89af-a81e-45f7-9f4f-9ab7991823d6" 00:17:37.287 ], 00:17:37.287 "product_name": "Malloc disk", 00:17:37.287 "block_size": 512, 00:17:37.287 "num_blocks": 65536, 00:17:37.287 "uuid": "461c89af-a81e-45f7-9f4f-9ab7991823d6", 00:17:37.287 "assigned_rate_limits": { 00:17:37.287 "rw_ios_per_sec": 0, 00:17:37.287 "rw_mbytes_per_sec": 0, 00:17:37.287 "r_mbytes_per_sec": 0, 00:17:37.287 "w_mbytes_per_sec": 0 00:17:37.287 }, 00:17:37.287 "claimed": true, 00:17:37.287 "claim_type": "exclusive_write", 00:17:37.287 "zoned": false, 00:17:37.287 "supported_io_types": { 00:17:37.287 "read": true, 00:17:37.287 "write": true, 00:17:37.287 "unmap": true, 00:17:37.287 "flush": true, 00:17:37.287 "reset": true, 00:17:37.287 "nvme_admin": false, 00:17:37.287 "nvme_io": false, 00:17:37.287 "nvme_io_md": false, 00:17:37.287 "write_zeroes": true, 00:17:37.287 "zcopy": true, 00:17:37.287 "get_zone_info": false, 00:17:37.287 "zone_management": false, 00:17:37.287 "zone_append": false, 00:17:37.287 "compare": false, 00:17:37.287 "compare_and_write": false, 00:17:37.287 "abort": true, 
00:17:37.287 "seek_hole": false, 00:17:37.287 "seek_data": false, 00:17:37.287 "copy": true, 00:17:37.287 "nvme_iov_md": false 00:17:37.287 }, 00:17:37.287 "memory_domains": [ 00:17:37.287 { 00:17:37.287 "dma_device_id": "system", 00:17:37.287 "dma_device_type": 1 00:17:37.287 }, 00:17:37.287 { 00:17:37.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.287 "dma_device_type": 2 00:17:37.287 } 00:17:37.287 ], 00:17:37.287 "driver_specific": {} 00:17:37.287 } 00:17:37.287 ] 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.287 09:17:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.287 "name": "Existed_Raid", 00:17:37.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.287 "strip_size_kb": 0, 00:17:37.287 "state": "configuring", 00:17:37.287 "raid_level": "raid1", 00:17:37.287 "superblock": false, 00:17:37.287 "num_base_bdevs": 4, 00:17:37.287 "num_base_bdevs_discovered": 3, 00:17:37.287 "num_base_bdevs_operational": 4, 00:17:37.287 "base_bdevs_list": [ 00:17:37.287 { 00:17:37.287 "name": "BaseBdev1", 00:17:37.287 "uuid": "b95e8c06-558d-432e-bf18-17f4298d623e", 00:17:37.287 "is_configured": true, 00:17:37.287 "data_offset": 0, 00:17:37.287 "data_size": 65536 00:17:37.287 }, 00:17:37.287 { 00:17:37.287 "name": "BaseBdev2", 00:17:37.287 "uuid": "943522f9-5a3c-412d-98d9-269e3a4caed0", 00:17:37.287 "is_configured": true, 00:17:37.287 "data_offset": 0, 00:17:37.287 "data_size": 65536 00:17:37.287 }, 00:17:37.287 { 00:17:37.287 "name": "BaseBdev3", 00:17:37.287 "uuid": "461c89af-a81e-45f7-9f4f-9ab7991823d6", 00:17:37.287 "is_configured": true, 00:17:37.287 "data_offset": 0, 00:17:37.287 "data_size": 65536 00:17:37.287 }, 00:17:37.287 { 00:17:37.287 "name": "BaseBdev4", 00:17:37.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.287 "is_configured": false, 00:17:37.287 "data_offset": 
0, 00:17:37.287 "data_size": 0 00:17:37.287 } 00:17:37.287 ] 00:17:37.287 }' 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.287 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.856 [2024-10-15 09:17:21.618711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:37.856 [2024-10-15 09:17:21.618808] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:37.856 [2024-10-15 09:17:21.618821] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:37.856 [2024-10-15 09:17:21.619224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:37.856 [2024-10-15 09:17:21.619499] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:37.856 [2024-10-15 09:17:21.619522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:37.856 [2024-10-15 09:17:21.619874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.856 BaseBdev4 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.856 [ 00:17:37.856 { 00:17:37.856 "name": "BaseBdev4", 00:17:37.856 "aliases": [ 00:17:37.856 "01e7cf42-9cc9-476f-8bd1-f1c5be7ddb72" 00:17:37.856 ], 00:17:37.856 "product_name": "Malloc disk", 00:17:37.856 "block_size": 512, 00:17:37.856 "num_blocks": 65536, 00:17:37.856 "uuid": "01e7cf42-9cc9-476f-8bd1-f1c5be7ddb72", 00:17:37.856 "assigned_rate_limits": { 00:17:37.856 "rw_ios_per_sec": 0, 00:17:37.856 "rw_mbytes_per_sec": 0, 00:17:37.856 "r_mbytes_per_sec": 0, 00:17:37.856 "w_mbytes_per_sec": 0 00:17:37.856 }, 00:17:37.856 "claimed": true, 00:17:37.856 "claim_type": "exclusive_write", 00:17:37.856 "zoned": false, 00:17:37.856 "supported_io_types": { 00:17:37.856 "read": true, 00:17:37.856 "write": true, 00:17:37.856 "unmap": true, 00:17:37.856 "flush": true, 00:17:37.856 "reset": true, 00:17:37.856 "nvme_admin": false, 00:17:37.856 "nvme_io": 
false, 00:17:37.856 "nvme_io_md": false, 00:17:37.856 "write_zeroes": true, 00:17:37.856 "zcopy": true, 00:17:37.856 "get_zone_info": false, 00:17:37.856 "zone_management": false, 00:17:37.856 "zone_append": false, 00:17:37.856 "compare": false, 00:17:37.856 "compare_and_write": false, 00:17:37.856 "abort": true, 00:17:37.856 "seek_hole": false, 00:17:37.856 "seek_data": false, 00:17:37.856 "copy": true, 00:17:37.856 "nvme_iov_md": false 00:17:37.856 }, 00:17:37.856 "memory_domains": [ 00:17:37.856 { 00:17:37.856 "dma_device_id": "system", 00:17:37.856 "dma_device_type": 1 00:17:37.856 }, 00:17:37.856 { 00:17:37.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.856 "dma_device_type": 2 00:17:37.856 } 00:17:37.856 ], 00:17:37.856 "driver_specific": {} 00:17:37.856 } 00:17:37.856 ] 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.856 "name": "Existed_Raid", 00:17:37.856 "uuid": "180d008b-fddd-4bc2-941c-aca2bcbe0b23", 00:17:37.856 "strip_size_kb": 0, 00:17:37.856 "state": "online", 00:17:37.856 "raid_level": "raid1", 00:17:37.856 "superblock": false, 00:17:37.856 "num_base_bdevs": 4, 00:17:37.856 "num_base_bdevs_discovered": 4, 00:17:37.856 "num_base_bdevs_operational": 4, 00:17:37.856 "base_bdevs_list": [ 00:17:37.856 { 00:17:37.856 "name": "BaseBdev1", 00:17:37.856 "uuid": "b95e8c06-558d-432e-bf18-17f4298d623e", 00:17:37.856 "is_configured": true, 00:17:37.856 "data_offset": 0, 00:17:37.856 "data_size": 65536 00:17:37.856 }, 00:17:37.856 { 00:17:37.856 "name": "BaseBdev2", 00:17:37.856 "uuid": "943522f9-5a3c-412d-98d9-269e3a4caed0", 00:17:37.856 "is_configured": true, 00:17:37.856 "data_offset": 0, 00:17:37.856 "data_size": 65536 00:17:37.856 }, 00:17:37.856 { 00:17:37.856 "name": "BaseBdev3", 00:17:37.856 "uuid": "461c89af-a81e-45f7-9f4f-9ab7991823d6", 
00:17:37.856 "is_configured": true, 00:17:37.856 "data_offset": 0, 00:17:37.856 "data_size": 65536 00:17:37.856 }, 00:17:37.856 { 00:17:37.856 "name": "BaseBdev4", 00:17:37.856 "uuid": "01e7cf42-9cc9-476f-8bd1-f1c5be7ddb72", 00:17:37.856 "is_configured": true, 00:17:37.856 "data_offset": 0, 00:17:37.856 "data_size": 65536 00:17:37.856 } 00:17:37.856 ] 00:17:37.856 }' 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.856 09:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.423 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:38.423 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:38.423 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:38.423 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:38.423 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:38.423 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:38.423 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:38.423 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.423 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:38.423 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.423 [2024-10-15 09:17:22.187399] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.423 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.423 09:17:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:38.423 "name": "Existed_Raid", 00:17:38.423 "aliases": [ 00:17:38.423 "180d008b-fddd-4bc2-941c-aca2bcbe0b23" 00:17:38.423 ], 00:17:38.423 "product_name": "Raid Volume", 00:17:38.423 "block_size": 512, 00:17:38.423 "num_blocks": 65536, 00:17:38.423 "uuid": "180d008b-fddd-4bc2-941c-aca2bcbe0b23", 00:17:38.423 "assigned_rate_limits": { 00:17:38.423 "rw_ios_per_sec": 0, 00:17:38.423 "rw_mbytes_per_sec": 0, 00:17:38.423 "r_mbytes_per_sec": 0, 00:17:38.423 "w_mbytes_per_sec": 0 00:17:38.423 }, 00:17:38.423 "claimed": false, 00:17:38.423 "zoned": false, 00:17:38.423 "supported_io_types": { 00:17:38.423 "read": true, 00:17:38.423 "write": true, 00:17:38.423 "unmap": false, 00:17:38.423 "flush": false, 00:17:38.423 "reset": true, 00:17:38.423 "nvme_admin": false, 00:17:38.423 "nvme_io": false, 00:17:38.423 "nvme_io_md": false, 00:17:38.423 "write_zeroes": true, 00:17:38.423 "zcopy": false, 00:17:38.423 "get_zone_info": false, 00:17:38.423 "zone_management": false, 00:17:38.423 "zone_append": false, 00:17:38.423 "compare": false, 00:17:38.423 "compare_and_write": false, 00:17:38.423 "abort": false, 00:17:38.423 "seek_hole": false, 00:17:38.423 "seek_data": false, 00:17:38.423 "copy": false, 00:17:38.423 "nvme_iov_md": false 00:17:38.423 }, 00:17:38.423 "memory_domains": [ 00:17:38.423 { 00:17:38.423 "dma_device_id": "system", 00:17:38.423 "dma_device_type": 1 00:17:38.423 }, 00:17:38.423 { 00:17:38.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.423 "dma_device_type": 2 00:17:38.423 }, 00:17:38.423 { 00:17:38.423 "dma_device_id": "system", 00:17:38.423 "dma_device_type": 1 00:17:38.423 }, 00:17:38.423 { 00:17:38.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.423 "dma_device_type": 2 00:17:38.423 }, 00:17:38.423 { 00:17:38.423 "dma_device_id": "system", 00:17:38.423 "dma_device_type": 1 00:17:38.423 }, 00:17:38.423 { 00:17:38.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.423 "dma_device_type": 2 
00:17:38.423 }, 00:17:38.423 { 00:17:38.423 "dma_device_id": "system", 00:17:38.423 "dma_device_type": 1 00:17:38.423 }, 00:17:38.423 { 00:17:38.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.423 "dma_device_type": 2 00:17:38.423 } 00:17:38.423 ], 00:17:38.423 "driver_specific": { 00:17:38.423 "raid": { 00:17:38.423 "uuid": "180d008b-fddd-4bc2-941c-aca2bcbe0b23", 00:17:38.423 "strip_size_kb": 0, 00:17:38.423 "state": "online", 00:17:38.423 "raid_level": "raid1", 00:17:38.423 "superblock": false, 00:17:38.423 "num_base_bdevs": 4, 00:17:38.423 "num_base_bdevs_discovered": 4, 00:17:38.423 "num_base_bdevs_operational": 4, 00:17:38.423 "base_bdevs_list": [ 00:17:38.423 { 00:17:38.423 "name": "BaseBdev1", 00:17:38.423 "uuid": "b95e8c06-558d-432e-bf18-17f4298d623e", 00:17:38.423 "is_configured": true, 00:17:38.423 "data_offset": 0, 00:17:38.423 "data_size": 65536 00:17:38.423 }, 00:17:38.423 { 00:17:38.423 "name": "BaseBdev2", 00:17:38.423 "uuid": "943522f9-5a3c-412d-98d9-269e3a4caed0", 00:17:38.423 "is_configured": true, 00:17:38.423 "data_offset": 0, 00:17:38.423 "data_size": 65536 00:17:38.423 }, 00:17:38.423 { 00:17:38.423 "name": "BaseBdev3", 00:17:38.423 "uuid": "461c89af-a81e-45f7-9f4f-9ab7991823d6", 00:17:38.423 "is_configured": true, 00:17:38.423 "data_offset": 0, 00:17:38.423 "data_size": 65536 00:17:38.423 }, 00:17:38.423 { 00:17:38.423 "name": "BaseBdev4", 00:17:38.423 "uuid": "01e7cf42-9cc9-476f-8bd1-f1c5be7ddb72", 00:17:38.423 "is_configured": true, 00:17:38.423 "data_offset": 0, 00:17:38.423 "data_size": 65536 00:17:38.423 } 00:17:38.423 ] 00:17:38.423 } 00:17:38.423 } 00:17:38.423 }' 00:17:38.423 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:38.423 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:38.423 BaseBdev2 00:17:38.423 BaseBdev3 00:17:38.423 BaseBdev4' 00:17:38.423 
09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.423 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:38.423 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:38.423 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:38.423 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.423 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.423 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.682 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.682 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:38.682 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:38.682 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:38.682 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:38.682 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.682 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.682 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.682 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.682 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:17:38.682 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:38.682 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:38.682 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:38.682 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.682 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.682 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.682 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.682 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:38.682 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:38.682 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:38.683 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:38.683 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.683 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.683 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.683 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.683 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:38.683 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:17:38.683 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:38.683 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.683 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.683 [2024-10-15 09:17:22.555164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:38.972 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.972 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:38.972 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:38.972 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:38.972 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:38.972 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:38.972 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:38.972 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.972 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.973 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.973 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.973 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:38.973 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.973 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:17:38.973 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.973 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.973 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.973 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.973 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.973 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.973 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.973 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.973 "name": "Existed_Raid", 00:17:38.973 "uuid": "180d008b-fddd-4bc2-941c-aca2bcbe0b23", 00:17:38.973 "strip_size_kb": 0, 00:17:38.973 "state": "online", 00:17:38.973 "raid_level": "raid1", 00:17:38.973 "superblock": false, 00:17:38.973 "num_base_bdevs": 4, 00:17:38.973 "num_base_bdevs_discovered": 3, 00:17:38.973 "num_base_bdevs_operational": 3, 00:17:38.973 "base_bdevs_list": [ 00:17:38.973 { 00:17:38.973 "name": null, 00:17:38.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.973 "is_configured": false, 00:17:38.973 "data_offset": 0, 00:17:38.973 "data_size": 65536 00:17:38.973 }, 00:17:38.973 { 00:17:38.973 "name": "BaseBdev2", 00:17:38.973 "uuid": "943522f9-5a3c-412d-98d9-269e3a4caed0", 00:17:38.973 "is_configured": true, 00:17:38.973 "data_offset": 0, 00:17:38.973 "data_size": 65536 00:17:38.973 }, 00:17:38.973 { 00:17:38.973 "name": "BaseBdev3", 00:17:38.973 "uuid": "461c89af-a81e-45f7-9f4f-9ab7991823d6", 00:17:38.973 "is_configured": true, 00:17:38.973 "data_offset": 0, 00:17:38.973 "data_size": 65536 00:17:38.973 }, 00:17:38.973 { 
00:17:38.973 "name": "BaseBdev4", 00:17:38.973 "uuid": "01e7cf42-9cc9-476f-8bd1-f1c5be7ddb72", 00:17:38.973 "is_configured": true, 00:17:38.973 "data_offset": 0, 00:17:38.973 "data_size": 65536 00:17:38.973 } 00:17:38.973 ] 00:17:38.973 }' 00:17:38.973 09:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.973 09:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.232 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:39.232 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:39.232 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:39.232 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.232 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.232 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.232 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.491 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:39.491 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:39.492 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:39.492 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.492 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.492 [2024-10-15 09:17:23.192165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:39.492 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.492 
09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:39.492 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:39.492 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.492 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:39.492 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.492 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.492 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.492 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:39.492 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:39.492 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:39.492 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.492 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.492 [2024-10-15 09:17:23.353928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:39.751 09:17:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.751 [2024-10-15 09:17:23.500510] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:39.751 [2024-10-15 09:17:23.500659] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:39.751 [2024-10-15 09:17:23.595792] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.751 [2024-10-15 09:17:23.596333] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:39.751 [2024-10-15 09:17:23.596573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.751 09:17:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.751 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.011 BaseBdev2 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:40.011 09:17:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.011 [ 00:17:40.011 { 00:17:40.011 "name": "BaseBdev2", 00:17:40.011 "aliases": [ 00:17:40.011 "6926a7f6-341a-44d0-81f9-69d8c44f4f23" 00:17:40.011 ], 00:17:40.011 "product_name": "Malloc disk", 00:17:40.011 "block_size": 512, 00:17:40.011 "num_blocks": 65536, 00:17:40.011 "uuid": "6926a7f6-341a-44d0-81f9-69d8c44f4f23", 00:17:40.011 "assigned_rate_limits": { 00:17:40.011 "rw_ios_per_sec": 0, 00:17:40.011 "rw_mbytes_per_sec": 0, 00:17:40.011 "r_mbytes_per_sec": 0, 00:17:40.011 "w_mbytes_per_sec": 0 00:17:40.011 }, 00:17:40.011 "claimed": false, 00:17:40.011 "zoned": false, 00:17:40.011 "supported_io_types": { 00:17:40.011 "read": true, 00:17:40.011 "write": true, 00:17:40.011 "unmap": true, 00:17:40.011 "flush": true, 00:17:40.011 "reset": true, 00:17:40.011 "nvme_admin": false, 00:17:40.011 "nvme_io": false, 00:17:40.011 "nvme_io_md": false, 00:17:40.011 "write_zeroes": true, 00:17:40.011 "zcopy": true, 00:17:40.011 "get_zone_info": false, 00:17:40.011 "zone_management": false, 00:17:40.011 "zone_append": false, 00:17:40.011 "compare": false, 00:17:40.011 "compare_and_write": false, 
00:17:40.011 "abort": true, 00:17:40.011 "seek_hole": false, 00:17:40.011 "seek_data": false, 00:17:40.011 "copy": true, 00:17:40.011 "nvme_iov_md": false 00:17:40.011 }, 00:17:40.011 "memory_domains": [ 00:17:40.011 { 00:17:40.011 "dma_device_id": "system", 00:17:40.011 "dma_device_type": 1 00:17:40.011 }, 00:17:40.011 { 00:17:40.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.011 "dma_device_type": 2 00:17:40.011 } 00:17:40.011 ], 00:17:40.011 "driver_specific": {} 00:17:40.011 } 00:17:40.011 ] 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.011 BaseBdev3 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:40.011 09:17:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.011 [ 00:17:40.011 { 00:17:40.011 "name": "BaseBdev3", 00:17:40.011 "aliases": [ 00:17:40.011 "56c65252-fe9d-4e9e-8c48-d50224576503" 00:17:40.011 ], 00:17:40.011 "product_name": "Malloc disk", 00:17:40.011 "block_size": 512, 00:17:40.011 "num_blocks": 65536, 00:17:40.011 "uuid": "56c65252-fe9d-4e9e-8c48-d50224576503", 00:17:40.011 "assigned_rate_limits": { 00:17:40.011 "rw_ios_per_sec": 0, 00:17:40.011 "rw_mbytes_per_sec": 0, 00:17:40.011 "r_mbytes_per_sec": 0, 00:17:40.011 "w_mbytes_per_sec": 0 00:17:40.011 }, 00:17:40.011 "claimed": false, 00:17:40.011 "zoned": false, 00:17:40.011 "supported_io_types": { 00:17:40.011 "read": true, 00:17:40.011 "write": true, 00:17:40.011 "unmap": true, 00:17:40.011 "flush": true, 00:17:40.011 "reset": true, 00:17:40.011 "nvme_admin": false, 00:17:40.011 "nvme_io": false, 00:17:40.011 "nvme_io_md": false, 00:17:40.011 "write_zeroes": true, 00:17:40.011 "zcopy": true, 00:17:40.011 "get_zone_info": false, 00:17:40.011 "zone_management": false, 00:17:40.011 "zone_append": false, 00:17:40.011 "compare": false, 00:17:40.011 "compare_and_write": false, 
00:17:40.011 "abort": true, 00:17:40.011 "seek_hole": false, 00:17:40.011 "seek_data": false, 00:17:40.011 "copy": true, 00:17:40.011 "nvme_iov_md": false 00:17:40.011 }, 00:17:40.011 "memory_domains": [ 00:17:40.011 { 00:17:40.011 "dma_device_id": "system", 00:17:40.011 "dma_device_type": 1 00:17:40.011 }, 00:17:40.011 { 00:17:40.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.011 "dma_device_type": 2 00:17:40.011 } 00:17:40.011 ], 00:17:40.011 "driver_specific": {} 00:17:40.011 } 00:17:40.011 ] 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.011 BaseBdev4 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:40.011 09:17:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.011 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.011 [ 00:17:40.011 { 00:17:40.011 "name": "BaseBdev4", 00:17:40.011 "aliases": [ 00:17:40.011 "05112935-ec83-4a70-bcbe-c3f0220bed08" 00:17:40.011 ], 00:17:40.011 "product_name": "Malloc disk", 00:17:40.011 "block_size": 512, 00:17:40.011 "num_blocks": 65536, 00:17:40.011 "uuid": "05112935-ec83-4a70-bcbe-c3f0220bed08", 00:17:40.011 "assigned_rate_limits": { 00:17:40.011 "rw_ios_per_sec": 0, 00:17:40.012 "rw_mbytes_per_sec": 0, 00:17:40.012 "r_mbytes_per_sec": 0, 00:17:40.012 "w_mbytes_per_sec": 0 00:17:40.012 }, 00:17:40.012 "claimed": false, 00:17:40.012 "zoned": false, 00:17:40.012 "supported_io_types": { 00:17:40.012 "read": true, 00:17:40.012 "write": true, 00:17:40.012 "unmap": true, 00:17:40.012 "flush": true, 00:17:40.012 "reset": true, 00:17:40.012 "nvme_admin": false, 00:17:40.012 "nvme_io": false, 00:17:40.012 "nvme_io_md": false, 00:17:40.012 "write_zeroes": true, 00:17:40.012 "zcopy": true, 00:17:40.012 "get_zone_info": false, 00:17:40.012 "zone_management": false, 00:17:40.012 "zone_append": false, 00:17:40.012 "compare": false, 00:17:40.012 "compare_and_write": false, 
00:17:40.012 "abort": true, 00:17:40.012 "seek_hole": false, 00:17:40.012 "seek_data": false, 00:17:40.012 "copy": true, 00:17:40.012 "nvme_iov_md": false 00:17:40.012 }, 00:17:40.012 "memory_domains": [ 00:17:40.012 { 00:17:40.012 "dma_device_id": "system", 00:17:40.012 "dma_device_type": 1 00:17:40.012 }, 00:17:40.012 { 00:17:40.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.012 "dma_device_type": 2 00:17:40.012 } 00:17:40.012 ], 00:17:40.012 "driver_specific": {} 00:17:40.012 } 00:17:40.012 ] 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.012 [2024-10-15 09:17:23.909535] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:40.012 [2024-10-15 09:17:23.909739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:40.012 [2024-10-15 09:17:23.909921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:40.012 [2024-10-15 09:17:23.912706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:40.012 [2024-10-15 09:17:23.912899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:40.012 09:17:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.012 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.271 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.271 "name": "Existed_Raid", 00:17:40.271 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:40.271 "strip_size_kb": 0, 00:17:40.271 "state": "configuring", 00:17:40.271 "raid_level": "raid1", 00:17:40.271 "superblock": false, 00:17:40.271 "num_base_bdevs": 4, 00:17:40.271 "num_base_bdevs_discovered": 3, 00:17:40.271 "num_base_bdevs_operational": 4, 00:17:40.271 "base_bdevs_list": [ 00:17:40.271 { 00:17:40.271 "name": "BaseBdev1", 00:17:40.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.271 "is_configured": false, 00:17:40.271 "data_offset": 0, 00:17:40.271 "data_size": 0 00:17:40.271 }, 00:17:40.271 { 00:17:40.271 "name": "BaseBdev2", 00:17:40.271 "uuid": "6926a7f6-341a-44d0-81f9-69d8c44f4f23", 00:17:40.271 "is_configured": true, 00:17:40.271 "data_offset": 0, 00:17:40.271 "data_size": 65536 00:17:40.271 }, 00:17:40.271 { 00:17:40.271 "name": "BaseBdev3", 00:17:40.271 "uuid": "56c65252-fe9d-4e9e-8c48-d50224576503", 00:17:40.271 "is_configured": true, 00:17:40.271 "data_offset": 0, 00:17:40.271 "data_size": 65536 00:17:40.271 }, 00:17:40.271 { 00:17:40.271 "name": "BaseBdev4", 00:17:40.271 "uuid": "05112935-ec83-4a70-bcbe-c3f0220bed08", 00:17:40.271 "is_configured": true, 00:17:40.271 "data_offset": 0, 00:17:40.271 "data_size": 65536 00:17:40.271 } 00:17:40.271 ] 00:17:40.271 }' 00:17:40.271 09:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.271 09:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.591 09:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:40.591 09:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.591 09:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.591 [2024-10-15 09:17:24.417705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:40.591 09:17:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.591 09:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:40.591 09:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.591 09:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.591 09:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.591 09:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.591 09:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:40.591 09:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.591 09:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.591 09:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.591 09:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.591 09:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.591 09:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.591 09:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.591 09:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.592 09:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.592 09:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.592 "name": "Existed_Raid", 00:17:40.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.592 
"strip_size_kb": 0, 00:17:40.592 "state": "configuring", 00:17:40.592 "raid_level": "raid1", 00:17:40.592 "superblock": false, 00:17:40.592 "num_base_bdevs": 4, 00:17:40.592 "num_base_bdevs_discovered": 2, 00:17:40.592 "num_base_bdevs_operational": 4, 00:17:40.592 "base_bdevs_list": [ 00:17:40.592 { 00:17:40.592 "name": "BaseBdev1", 00:17:40.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.592 "is_configured": false, 00:17:40.592 "data_offset": 0, 00:17:40.592 "data_size": 0 00:17:40.592 }, 00:17:40.592 { 00:17:40.592 "name": null, 00:17:40.592 "uuid": "6926a7f6-341a-44d0-81f9-69d8c44f4f23", 00:17:40.592 "is_configured": false, 00:17:40.592 "data_offset": 0, 00:17:40.592 "data_size": 65536 00:17:40.592 }, 00:17:40.592 { 00:17:40.592 "name": "BaseBdev3", 00:17:40.592 "uuid": "56c65252-fe9d-4e9e-8c48-d50224576503", 00:17:40.592 "is_configured": true, 00:17:40.592 "data_offset": 0, 00:17:40.592 "data_size": 65536 00:17:40.592 }, 00:17:40.592 { 00:17:40.592 "name": "BaseBdev4", 00:17:40.592 "uuid": "05112935-ec83-4a70-bcbe-c3f0220bed08", 00:17:40.592 "is_configured": true, 00:17:40.592 "data_offset": 0, 00:17:40.592 "data_size": 65536 00:17:40.592 } 00:17:40.592 ] 00:17:40.592 }' 00:17:40.592 09:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.592 09:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.159 09:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.159 09:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:41.159 09:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.159 09:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.159 09:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.159 09:17:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:41.159 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:41.159 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.159 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.159 [2024-10-15 09:17:25.058967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:41.159 BaseBdev1 00:17:41.159 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.159 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:41.159 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:41.159 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:41.159 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:41.159 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:41.159 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:41.159 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:41.159 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.159 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.159 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.159 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:41.159 09:17:25 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.159 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.159 [ 00:17:41.159 { 00:17:41.159 "name": "BaseBdev1", 00:17:41.159 "aliases": [ 00:17:41.159 "0a633205-d689-4b05-a311-7d97e1bb83c5" 00:17:41.159 ], 00:17:41.159 "product_name": "Malloc disk", 00:17:41.159 "block_size": 512, 00:17:41.159 "num_blocks": 65536, 00:17:41.159 "uuid": "0a633205-d689-4b05-a311-7d97e1bb83c5", 00:17:41.159 "assigned_rate_limits": { 00:17:41.159 "rw_ios_per_sec": 0, 00:17:41.159 "rw_mbytes_per_sec": 0, 00:17:41.159 "r_mbytes_per_sec": 0, 00:17:41.159 "w_mbytes_per_sec": 0 00:17:41.159 }, 00:17:41.159 "claimed": true, 00:17:41.159 "claim_type": "exclusive_write", 00:17:41.159 "zoned": false, 00:17:41.159 "supported_io_types": { 00:17:41.159 "read": true, 00:17:41.159 "write": true, 00:17:41.159 "unmap": true, 00:17:41.159 "flush": true, 00:17:41.159 "reset": true, 00:17:41.159 "nvme_admin": false, 00:17:41.159 "nvme_io": false, 00:17:41.159 "nvme_io_md": false, 00:17:41.159 "write_zeroes": true, 00:17:41.159 "zcopy": true, 00:17:41.159 "get_zone_info": false, 00:17:41.159 "zone_management": false, 00:17:41.159 "zone_append": false, 00:17:41.159 "compare": false, 00:17:41.159 "compare_and_write": false, 00:17:41.418 "abort": true, 00:17:41.418 "seek_hole": false, 00:17:41.418 "seek_data": false, 00:17:41.418 "copy": true, 00:17:41.418 "nvme_iov_md": false 00:17:41.418 }, 00:17:41.418 "memory_domains": [ 00:17:41.418 { 00:17:41.418 "dma_device_id": "system", 00:17:41.418 "dma_device_type": 1 00:17:41.418 }, 00:17:41.418 { 00:17:41.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.418 "dma_device_type": 2 00:17:41.418 } 00:17:41.418 ], 00:17:41.418 "driver_specific": {} 00:17:41.418 } 00:17:41.418 ] 00:17:41.418 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.418 09:17:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:17:41.418 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:41.418 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:41.418 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:41.418 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.418 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.418 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:41.418 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.418 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.418 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.418 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.418 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.418 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.418 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.418 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.418 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.418 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.418 "name": "Existed_Raid", 00:17:41.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.418 
"strip_size_kb": 0, 00:17:41.418 "state": "configuring", 00:17:41.418 "raid_level": "raid1", 00:17:41.418 "superblock": false, 00:17:41.418 "num_base_bdevs": 4, 00:17:41.418 "num_base_bdevs_discovered": 3, 00:17:41.418 "num_base_bdevs_operational": 4, 00:17:41.418 "base_bdevs_list": [ 00:17:41.418 { 00:17:41.418 "name": "BaseBdev1", 00:17:41.418 "uuid": "0a633205-d689-4b05-a311-7d97e1bb83c5", 00:17:41.418 "is_configured": true, 00:17:41.418 "data_offset": 0, 00:17:41.418 "data_size": 65536 00:17:41.418 }, 00:17:41.418 { 00:17:41.418 "name": null, 00:17:41.418 "uuid": "6926a7f6-341a-44d0-81f9-69d8c44f4f23", 00:17:41.418 "is_configured": false, 00:17:41.418 "data_offset": 0, 00:17:41.418 "data_size": 65536 00:17:41.418 }, 00:17:41.418 { 00:17:41.418 "name": "BaseBdev3", 00:17:41.418 "uuid": "56c65252-fe9d-4e9e-8c48-d50224576503", 00:17:41.418 "is_configured": true, 00:17:41.418 "data_offset": 0, 00:17:41.418 "data_size": 65536 00:17:41.418 }, 00:17:41.418 { 00:17:41.418 "name": "BaseBdev4", 00:17:41.418 "uuid": "05112935-ec83-4a70-bcbe-c3f0220bed08", 00:17:41.418 "is_configured": true, 00:17:41.418 "data_offset": 0, 00:17:41.418 "data_size": 65536 00:17:41.418 } 00:17:41.418 ] 00:17:41.418 }' 00:17:41.418 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.418 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.986 
09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.986 [2024-10-15 09:17:25.711278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.986 "name": "Existed_Raid", 00:17:41.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.986 "strip_size_kb": 0, 00:17:41.986 "state": "configuring", 00:17:41.986 "raid_level": "raid1", 00:17:41.986 "superblock": false, 00:17:41.986 "num_base_bdevs": 4, 00:17:41.986 "num_base_bdevs_discovered": 2, 00:17:41.986 "num_base_bdevs_operational": 4, 00:17:41.986 "base_bdevs_list": [ 00:17:41.986 { 00:17:41.986 "name": "BaseBdev1", 00:17:41.986 "uuid": "0a633205-d689-4b05-a311-7d97e1bb83c5", 00:17:41.986 "is_configured": true, 00:17:41.986 "data_offset": 0, 00:17:41.986 "data_size": 65536 00:17:41.986 }, 00:17:41.986 { 00:17:41.986 "name": null, 00:17:41.986 "uuid": "6926a7f6-341a-44d0-81f9-69d8c44f4f23", 00:17:41.986 "is_configured": false, 00:17:41.986 "data_offset": 0, 00:17:41.986 "data_size": 65536 00:17:41.986 }, 00:17:41.986 { 00:17:41.986 "name": null, 00:17:41.986 "uuid": "56c65252-fe9d-4e9e-8c48-d50224576503", 00:17:41.986 "is_configured": false, 00:17:41.986 "data_offset": 0, 00:17:41.986 "data_size": 65536 00:17:41.986 }, 00:17:41.986 { 00:17:41.986 "name": "BaseBdev4", 00:17:41.986 "uuid": "05112935-ec83-4a70-bcbe-c3f0220bed08", 00:17:41.986 "is_configured": true, 00:17:41.986 "data_offset": 0, 00:17:41.986 "data_size": 65536 00:17:41.986 } 00:17:41.986 ] 00:17:41.986 }' 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.986 09:17:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.555 [2024-10-15 09:17:26.239408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.555 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.555 "name": "Existed_Raid", 00:17:42.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.555 "strip_size_kb": 0, 00:17:42.555 "state": "configuring", 00:17:42.555 "raid_level": "raid1", 00:17:42.555 "superblock": false, 00:17:42.555 "num_base_bdevs": 4, 00:17:42.555 "num_base_bdevs_discovered": 3, 00:17:42.555 "num_base_bdevs_operational": 4, 00:17:42.555 "base_bdevs_list": [ 00:17:42.555 { 00:17:42.555 "name": "BaseBdev1", 00:17:42.555 "uuid": "0a633205-d689-4b05-a311-7d97e1bb83c5", 00:17:42.555 "is_configured": true, 00:17:42.555 "data_offset": 0, 00:17:42.555 "data_size": 65536 00:17:42.555 }, 00:17:42.555 { 00:17:42.555 "name": null, 00:17:42.555 "uuid": "6926a7f6-341a-44d0-81f9-69d8c44f4f23", 00:17:42.555 "is_configured": false, 00:17:42.555 "data_offset": 0, 00:17:42.555 "data_size": 65536 00:17:42.555 }, 00:17:42.555 { 
00:17:42.555 "name": "BaseBdev3", 00:17:42.555 "uuid": "56c65252-fe9d-4e9e-8c48-d50224576503", 00:17:42.555 "is_configured": true, 00:17:42.555 "data_offset": 0, 00:17:42.555 "data_size": 65536 00:17:42.555 }, 00:17:42.555 { 00:17:42.555 "name": "BaseBdev4", 00:17:42.555 "uuid": "05112935-ec83-4a70-bcbe-c3f0220bed08", 00:17:42.556 "is_configured": true, 00:17:42.556 "data_offset": 0, 00:17:42.556 "data_size": 65536 00:17:42.556 } 00:17:42.556 ] 00:17:42.556 }' 00:17:42.556 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.556 09:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.124 [2024-10-15 09:17:26.847643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.124 "name": "Existed_Raid", 00:17:43.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.124 "strip_size_kb": 0, 00:17:43.124 "state": "configuring", 00:17:43.124 "raid_level": "raid1", 00:17:43.124 "superblock": false, 00:17:43.124 
"num_base_bdevs": 4, 00:17:43.124 "num_base_bdevs_discovered": 2, 00:17:43.124 "num_base_bdevs_operational": 4, 00:17:43.124 "base_bdevs_list": [ 00:17:43.124 { 00:17:43.124 "name": null, 00:17:43.124 "uuid": "0a633205-d689-4b05-a311-7d97e1bb83c5", 00:17:43.124 "is_configured": false, 00:17:43.124 "data_offset": 0, 00:17:43.124 "data_size": 65536 00:17:43.124 }, 00:17:43.124 { 00:17:43.124 "name": null, 00:17:43.124 "uuid": "6926a7f6-341a-44d0-81f9-69d8c44f4f23", 00:17:43.124 "is_configured": false, 00:17:43.124 "data_offset": 0, 00:17:43.124 "data_size": 65536 00:17:43.124 }, 00:17:43.124 { 00:17:43.124 "name": "BaseBdev3", 00:17:43.124 "uuid": "56c65252-fe9d-4e9e-8c48-d50224576503", 00:17:43.124 "is_configured": true, 00:17:43.124 "data_offset": 0, 00:17:43.124 "data_size": 65536 00:17:43.124 }, 00:17:43.124 { 00:17:43.124 "name": "BaseBdev4", 00:17:43.124 "uuid": "05112935-ec83-4a70-bcbe-c3f0220bed08", 00:17:43.124 "is_configured": true, 00:17:43.124 "data_offset": 0, 00:17:43.124 "data_size": 65536 00:17:43.124 } 00:17:43.124 ] 00:17:43.124 }' 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.124 09:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:43.692 09:17:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.692 [2024-10-15 09:17:27.505016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.692 09:17:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.692 "name": "Existed_Raid", 00:17:43.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.692 "strip_size_kb": 0, 00:17:43.692 "state": "configuring", 00:17:43.692 "raid_level": "raid1", 00:17:43.692 "superblock": false, 00:17:43.692 "num_base_bdevs": 4, 00:17:43.692 "num_base_bdevs_discovered": 3, 00:17:43.692 "num_base_bdevs_operational": 4, 00:17:43.692 "base_bdevs_list": [ 00:17:43.692 { 00:17:43.692 "name": null, 00:17:43.692 "uuid": "0a633205-d689-4b05-a311-7d97e1bb83c5", 00:17:43.692 "is_configured": false, 00:17:43.692 "data_offset": 0, 00:17:43.692 "data_size": 65536 00:17:43.692 }, 00:17:43.692 { 00:17:43.692 "name": "BaseBdev2", 00:17:43.692 "uuid": "6926a7f6-341a-44d0-81f9-69d8c44f4f23", 00:17:43.692 "is_configured": true, 00:17:43.692 "data_offset": 0, 00:17:43.692 "data_size": 65536 00:17:43.692 }, 00:17:43.692 { 00:17:43.692 "name": "BaseBdev3", 00:17:43.692 "uuid": "56c65252-fe9d-4e9e-8c48-d50224576503", 00:17:43.692 "is_configured": true, 00:17:43.692 "data_offset": 0, 00:17:43.692 "data_size": 65536 00:17:43.692 }, 00:17:43.692 { 00:17:43.692 "name": "BaseBdev4", 00:17:43.692 "uuid": "05112935-ec83-4a70-bcbe-c3f0220bed08", 00:17:43.692 "is_configured": true, 00:17:43.692 "data_offset": 0, 00:17:43.692 "data_size": 65536 00:17:43.692 } 00:17:43.692 ] 00:17:43.692 }' 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.692 09:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0a633205-d689-4b05-a311-7d97e1bb83c5 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.259 [2024-10-15 09:17:28.170510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:44.259 [2024-10-15 09:17:28.170594] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:44.259 [2024-10-15 09:17:28.170610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:44.259 [2024-10-15 09:17:28.170971] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:44.259 [2024-10-15 09:17:28.171222] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:44.259 [2024-10-15 09:17:28.171240] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:44.259 [2024-10-15 09:17:28.171572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.259 NewBaseBdev 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:44.259 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.260 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.260 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.260 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:44.260 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.260 09:17:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.518 [ 00:17:44.518 { 00:17:44.518 "name": "NewBaseBdev", 00:17:44.518 "aliases": [ 00:17:44.518 "0a633205-d689-4b05-a311-7d97e1bb83c5" 00:17:44.518 ], 00:17:44.518 "product_name": "Malloc disk", 00:17:44.518 "block_size": 512, 00:17:44.518 "num_blocks": 65536, 00:17:44.518 "uuid": "0a633205-d689-4b05-a311-7d97e1bb83c5", 00:17:44.518 "assigned_rate_limits": { 00:17:44.518 "rw_ios_per_sec": 0, 00:17:44.518 "rw_mbytes_per_sec": 0, 00:17:44.518 "r_mbytes_per_sec": 0, 00:17:44.518 "w_mbytes_per_sec": 0 00:17:44.518 }, 00:17:44.518 "claimed": true, 00:17:44.518 "claim_type": "exclusive_write", 00:17:44.518 "zoned": false, 00:17:44.518 "supported_io_types": { 00:17:44.518 "read": true, 00:17:44.518 "write": true, 00:17:44.518 "unmap": true, 00:17:44.518 "flush": true, 00:17:44.518 "reset": true, 00:17:44.518 "nvme_admin": false, 00:17:44.518 "nvme_io": false, 00:17:44.518 "nvme_io_md": false, 00:17:44.518 "write_zeroes": true, 00:17:44.518 "zcopy": true, 00:17:44.518 "get_zone_info": false, 00:17:44.518 "zone_management": false, 00:17:44.518 "zone_append": false, 00:17:44.518 "compare": false, 00:17:44.518 "compare_and_write": false, 00:17:44.518 "abort": true, 00:17:44.518 "seek_hole": false, 00:17:44.518 "seek_data": false, 00:17:44.518 "copy": true, 00:17:44.518 "nvme_iov_md": false 00:17:44.518 }, 00:17:44.518 "memory_domains": [ 00:17:44.518 { 00:17:44.518 "dma_device_id": "system", 00:17:44.518 "dma_device_type": 1 00:17:44.518 }, 00:17:44.518 { 00:17:44.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.518 "dma_device_type": 2 00:17:44.518 } 00:17:44.518 ], 00:17:44.518 "driver_specific": {} 00:17:44.518 } 00:17:44.518 ] 00:17:44.518 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.518 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:44.518 09:17:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:44.518 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.518 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.518 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.518 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.518 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:44.518 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.518 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.518 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.518 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.518 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.518 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.518 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.518 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.518 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.518 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.518 "name": "Existed_Raid", 00:17:44.518 "uuid": "f6852f8a-2066-4cfa-b6ca-c95ce8130e74", 00:17:44.518 "strip_size_kb": 0, 00:17:44.518 "state": "online", 00:17:44.518 "raid_level": "raid1", 
00:17:44.518 "superblock": false, 00:17:44.518 "num_base_bdevs": 4, 00:17:44.518 "num_base_bdevs_discovered": 4, 00:17:44.518 "num_base_bdevs_operational": 4, 00:17:44.518 "base_bdevs_list": [ 00:17:44.518 { 00:17:44.518 "name": "NewBaseBdev", 00:17:44.518 "uuid": "0a633205-d689-4b05-a311-7d97e1bb83c5", 00:17:44.518 "is_configured": true, 00:17:44.518 "data_offset": 0, 00:17:44.518 "data_size": 65536 00:17:44.518 }, 00:17:44.518 { 00:17:44.518 "name": "BaseBdev2", 00:17:44.518 "uuid": "6926a7f6-341a-44d0-81f9-69d8c44f4f23", 00:17:44.518 "is_configured": true, 00:17:44.518 "data_offset": 0, 00:17:44.518 "data_size": 65536 00:17:44.518 }, 00:17:44.518 { 00:17:44.518 "name": "BaseBdev3", 00:17:44.518 "uuid": "56c65252-fe9d-4e9e-8c48-d50224576503", 00:17:44.518 "is_configured": true, 00:17:44.518 "data_offset": 0, 00:17:44.518 "data_size": 65536 00:17:44.518 }, 00:17:44.518 { 00:17:44.518 "name": "BaseBdev4", 00:17:44.518 "uuid": "05112935-ec83-4a70-bcbe-c3f0220bed08", 00:17:44.518 "is_configured": true, 00:17:44.518 "data_offset": 0, 00:17:44.518 "data_size": 65536 00:17:44.518 } 00:17:44.518 ] 00:17:44.518 }' 00:17:44.518 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.518 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.086 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:45.086 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:45.086 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:45.086 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:45.086 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:45.086 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:17:45.086 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:45.086 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.086 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.086 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:45.086 [2024-10-15 09:17:28.763177] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:45.086 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.086 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:45.086 "name": "Existed_Raid", 00:17:45.086 "aliases": [ 00:17:45.086 "f6852f8a-2066-4cfa-b6ca-c95ce8130e74" 00:17:45.086 ], 00:17:45.086 "product_name": "Raid Volume", 00:17:45.086 "block_size": 512, 00:17:45.086 "num_blocks": 65536, 00:17:45.086 "uuid": "f6852f8a-2066-4cfa-b6ca-c95ce8130e74", 00:17:45.086 "assigned_rate_limits": { 00:17:45.086 "rw_ios_per_sec": 0, 00:17:45.086 "rw_mbytes_per_sec": 0, 00:17:45.086 "r_mbytes_per_sec": 0, 00:17:45.086 "w_mbytes_per_sec": 0 00:17:45.086 }, 00:17:45.086 "claimed": false, 00:17:45.086 "zoned": false, 00:17:45.086 "supported_io_types": { 00:17:45.086 "read": true, 00:17:45.086 "write": true, 00:17:45.086 "unmap": false, 00:17:45.086 "flush": false, 00:17:45.086 "reset": true, 00:17:45.086 "nvme_admin": false, 00:17:45.086 "nvme_io": false, 00:17:45.086 "nvme_io_md": false, 00:17:45.086 "write_zeroes": true, 00:17:45.086 "zcopy": false, 00:17:45.086 "get_zone_info": false, 00:17:45.086 "zone_management": false, 00:17:45.086 "zone_append": false, 00:17:45.086 "compare": false, 00:17:45.086 "compare_and_write": false, 00:17:45.086 "abort": false, 00:17:45.086 "seek_hole": false, 00:17:45.086 "seek_data": false, 00:17:45.086 "copy": false, 00:17:45.086 
"nvme_iov_md": false 00:17:45.086 }, 00:17:45.086 "memory_domains": [ 00:17:45.086 { 00:17:45.086 "dma_device_id": "system", 00:17:45.086 "dma_device_type": 1 00:17:45.086 }, 00:17:45.086 { 00:17:45.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.086 "dma_device_type": 2 00:17:45.086 }, 00:17:45.086 { 00:17:45.086 "dma_device_id": "system", 00:17:45.086 "dma_device_type": 1 00:17:45.086 }, 00:17:45.086 { 00:17:45.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.086 "dma_device_type": 2 00:17:45.086 }, 00:17:45.086 { 00:17:45.086 "dma_device_id": "system", 00:17:45.086 "dma_device_type": 1 00:17:45.086 }, 00:17:45.086 { 00:17:45.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.086 "dma_device_type": 2 00:17:45.086 }, 00:17:45.086 { 00:17:45.086 "dma_device_id": "system", 00:17:45.086 "dma_device_type": 1 00:17:45.086 }, 00:17:45.086 { 00:17:45.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.086 "dma_device_type": 2 00:17:45.086 } 00:17:45.086 ], 00:17:45.086 "driver_specific": { 00:17:45.086 "raid": { 00:17:45.086 "uuid": "f6852f8a-2066-4cfa-b6ca-c95ce8130e74", 00:17:45.086 "strip_size_kb": 0, 00:17:45.086 "state": "online", 00:17:45.086 "raid_level": "raid1", 00:17:45.086 "superblock": false, 00:17:45.086 "num_base_bdevs": 4, 00:17:45.086 "num_base_bdevs_discovered": 4, 00:17:45.086 "num_base_bdevs_operational": 4, 00:17:45.086 "base_bdevs_list": [ 00:17:45.086 { 00:17:45.086 "name": "NewBaseBdev", 00:17:45.086 "uuid": "0a633205-d689-4b05-a311-7d97e1bb83c5", 00:17:45.086 "is_configured": true, 00:17:45.086 "data_offset": 0, 00:17:45.086 "data_size": 65536 00:17:45.086 }, 00:17:45.086 { 00:17:45.086 "name": "BaseBdev2", 00:17:45.086 "uuid": "6926a7f6-341a-44d0-81f9-69d8c44f4f23", 00:17:45.086 "is_configured": true, 00:17:45.086 "data_offset": 0, 00:17:45.086 "data_size": 65536 00:17:45.086 }, 00:17:45.086 { 00:17:45.086 "name": "BaseBdev3", 00:17:45.086 "uuid": "56c65252-fe9d-4e9e-8c48-d50224576503", 00:17:45.086 "is_configured": true, 
00:17:45.086 "data_offset": 0, 00:17:45.086 "data_size": 65536 00:17:45.086 }, 00:17:45.086 { 00:17:45.086 "name": "BaseBdev4", 00:17:45.086 "uuid": "05112935-ec83-4a70-bcbe-c3f0220bed08", 00:17:45.086 "is_configured": true, 00:17:45.086 "data_offset": 0, 00:17:45.086 "data_size": 65536 00:17:45.086 } 00:17:45.086 ] 00:17:45.086 } 00:17:45.086 } 00:17:45.086 }' 00:17:45.086 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:45.086 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:45.086 BaseBdev2 00:17:45.086 BaseBdev3 00:17:45.086 BaseBdev4' 00:17:45.086 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.086 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:45.086 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:45.086 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:45.087 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.087 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.087 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.087 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.087 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:45.087 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:45.087 09:17:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:45.087 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:45.087 09:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.087 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.087 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.087 09:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.087 09:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:45.087 09:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:45.087 09:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:45.087 09:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:45.087 09:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.087 09:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.087 09:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.346 09:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.346 09:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:45.346 09:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:45.346 09:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:45.346 09:17:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:45.346 09:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.346 09:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.346 09:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.346 09:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.346 09:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:45.346 09:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:45.346 09:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:45.346 09:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.346 09:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.346 [2024-10-15 09:17:29.114852] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:45.346 [2024-10-15 09:17:29.114894] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:45.346 [2024-10-15 09:17:29.115020] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.346 [2024-10-15 09:17:29.115448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:45.346 [2024-10-15 09:17:29.115473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:45.346 09:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.346 09:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73556 
00:17:45.346 09:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73556 ']' 00:17:45.347 09:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73556 00:17:45.347 09:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:17:45.347 09:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:45.347 09:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73556 00:17:45.347 09:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:45.347 killing process with pid 73556 00:17:45.347 09:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:45.347 09:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73556' 00:17:45.347 09:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73556 00:17:45.347 [2024-10-15 09:17:29.153923] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:45.347 09:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73556 00:17:45.976 [2024-10-15 09:17:29.540371] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:46.922 00:17:46.922 real 0m13.160s 00:17:46.922 user 0m21.600s 00:17:46.922 sys 0m1.923s 00:17:46.922 ************************************ 00:17:46.922 END TEST raid_state_function_test 00:17:46.922 ************************************ 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.922 09:17:30 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:17:46.922 09:17:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:46.922 09:17:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:46.922 09:17:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:46.922 ************************************ 00:17:46.922 START TEST raid_state_function_test_sb 00:17:46.922 ************************************ 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:46.922 09:17:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:46.922 Process raid pid: 74244 00:17:46.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74244 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74244' 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74244 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 74244 ']' 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:46.922 09:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.922 [2024-10-15 09:17:30.844654] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:17:46.922 [2024-10-15 09:17:30.845130] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.181 [2024-10-15 09:17:31.027417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.439 [2024-10-15 09:17:31.181533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.697 [2024-10-15 09:17:31.410444] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.697 [2024-10-15 09:17:31.410512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.264 [2024-10-15 09:17:31.893006] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:48.264 [2024-10-15 09:17:31.893108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:48.264 [2024-10-15 09:17:31.893150] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:48.264 [2024-10-15 09:17:31.893176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:48.264 [2024-10-15 09:17:31.893193] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:17:48.264 [2024-10-15 09:17:31.893215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:48.264 [2024-10-15 09:17:31.893229] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:48.264 [2024-10-15 09:17:31.893250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.264 09:17:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.264 "name": "Existed_Raid", 00:17:48.264 "uuid": "7e0ac653-e502-40cc-ba19-8ed865700e71", 00:17:48.264 "strip_size_kb": 0, 00:17:48.264 "state": "configuring", 00:17:48.264 "raid_level": "raid1", 00:17:48.264 "superblock": true, 00:17:48.264 "num_base_bdevs": 4, 00:17:48.264 "num_base_bdevs_discovered": 0, 00:17:48.264 "num_base_bdevs_operational": 4, 00:17:48.264 "base_bdevs_list": [ 00:17:48.264 { 00:17:48.264 "name": "BaseBdev1", 00:17:48.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.264 "is_configured": false, 00:17:48.264 "data_offset": 0, 00:17:48.264 "data_size": 0 00:17:48.264 }, 00:17:48.264 { 00:17:48.264 "name": "BaseBdev2", 00:17:48.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.264 "is_configured": false, 00:17:48.264 "data_offset": 0, 00:17:48.264 "data_size": 0 00:17:48.264 }, 00:17:48.264 { 00:17:48.264 "name": "BaseBdev3", 00:17:48.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.264 "is_configured": false, 00:17:48.264 "data_offset": 0, 00:17:48.264 "data_size": 0 00:17:48.264 }, 00:17:48.264 { 00:17:48.264 "name": "BaseBdev4", 00:17:48.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.264 "is_configured": false, 00:17:48.264 "data_offset": 0, 00:17:48.264 "data_size": 0 00:17:48.264 } 00:17:48.264 ] 00:17:48.264 }' 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.264 09:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.522 09:17:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:48.522 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.522 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.522 [2024-10-15 09:17:32.425005] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:48.522 [2024-10-15 09:17:32.425066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:48.522 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.522 09:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:48.522 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.522 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.522 [2024-10-15 09:17:32.437100] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:48.522 [2024-10-15 09:17:32.437367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:48.522 [2024-10-15 09:17:32.437533] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:48.522 [2024-10-15 09:17:32.437733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:48.522 [2024-10-15 09:17:32.437890] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:48.522 [2024-10-15 09:17:32.437969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:48.522 [2024-10-15 09:17:32.438220] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:17:48.522 [2024-10-15 09:17:32.438300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:48.522 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.522 09:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:48.522 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.522 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.812 [2024-10-15 09:17:32.490498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:48.812 BaseBdev1 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.812 [ 00:17:48.812 { 00:17:48.812 "name": "BaseBdev1", 00:17:48.812 "aliases": [ 00:17:48.812 "0c84096c-5477-4df2-8772-2d41246498c7" 00:17:48.812 ], 00:17:48.812 "product_name": "Malloc disk", 00:17:48.812 "block_size": 512, 00:17:48.812 "num_blocks": 65536, 00:17:48.812 "uuid": "0c84096c-5477-4df2-8772-2d41246498c7", 00:17:48.812 "assigned_rate_limits": { 00:17:48.812 "rw_ios_per_sec": 0, 00:17:48.812 "rw_mbytes_per_sec": 0, 00:17:48.812 "r_mbytes_per_sec": 0, 00:17:48.812 "w_mbytes_per_sec": 0 00:17:48.812 }, 00:17:48.812 "claimed": true, 00:17:48.812 "claim_type": "exclusive_write", 00:17:48.812 "zoned": false, 00:17:48.812 "supported_io_types": { 00:17:48.812 "read": true, 00:17:48.812 "write": true, 00:17:48.812 "unmap": true, 00:17:48.812 "flush": true, 00:17:48.812 "reset": true, 00:17:48.812 "nvme_admin": false, 00:17:48.812 "nvme_io": false, 00:17:48.812 "nvme_io_md": false, 00:17:48.812 "write_zeroes": true, 00:17:48.812 "zcopy": true, 00:17:48.812 "get_zone_info": false, 00:17:48.812 "zone_management": false, 00:17:48.812 "zone_append": false, 00:17:48.812 "compare": false, 00:17:48.812 "compare_and_write": false, 00:17:48.812 "abort": true, 00:17:48.812 "seek_hole": false, 00:17:48.812 "seek_data": false, 00:17:48.812 "copy": true, 00:17:48.812 "nvme_iov_md": false 00:17:48.812 }, 00:17:48.812 "memory_domains": [ 00:17:48.812 { 00:17:48.812 "dma_device_id": "system", 00:17:48.812 "dma_device_type": 1 00:17:48.812 }, 00:17:48.812 { 00:17:48.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.812 "dma_device_type": 2 00:17:48.812 } 00:17:48.812 ], 00:17:48.812 "driver_specific": {} 
00:17:48.812 } 00:17:48.812 ] 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.812 09:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.813 09:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.813 09:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.813 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.813 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.813 09:17:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.813 09:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.813 "name": "Existed_Raid", 00:17:48.813 "uuid": "8a480385-e2fc-4bb0-98bc-a78e6690a62a", 00:17:48.813 "strip_size_kb": 0, 00:17:48.813 "state": "configuring", 00:17:48.813 "raid_level": "raid1", 00:17:48.813 "superblock": true, 00:17:48.813 "num_base_bdevs": 4, 00:17:48.813 "num_base_bdevs_discovered": 1, 00:17:48.813 "num_base_bdevs_operational": 4, 00:17:48.813 "base_bdevs_list": [ 00:17:48.813 { 00:17:48.813 "name": "BaseBdev1", 00:17:48.813 "uuid": "0c84096c-5477-4df2-8772-2d41246498c7", 00:17:48.813 "is_configured": true, 00:17:48.813 "data_offset": 2048, 00:17:48.813 "data_size": 63488 00:17:48.813 }, 00:17:48.813 { 00:17:48.813 "name": "BaseBdev2", 00:17:48.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.813 "is_configured": false, 00:17:48.813 "data_offset": 0, 00:17:48.813 "data_size": 0 00:17:48.813 }, 00:17:48.813 { 00:17:48.813 "name": "BaseBdev3", 00:17:48.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.813 "is_configured": false, 00:17:48.813 "data_offset": 0, 00:17:48.813 "data_size": 0 00:17:48.813 }, 00:17:48.813 { 00:17:48.813 "name": "BaseBdev4", 00:17:48.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.813 "is_configured": false, 00:17:48.813 "data_offset": 0, 00:17:48.813 "data_size": 0 00:17:48.813 } 00:17:48.813 ] 00:17:48.813 }' 00:17:48.813 09:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.813 09:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.391 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:49.391 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.391 09:17:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:49.391 [2024-10-15 09:17:33.042715] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:49.391 [2024-10-15 09:17:33.042795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:49.391 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.391 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:49.391 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.391 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.391 [2024-10-15 09:17:33.054815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.391 [2024-10-15 09:17:33.057547] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:49.391 [2024-10-15 09:17:33.057722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:49.391 [2024-10-15 09:17:33.057842] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:49.391 [2024-10-15 09:17:33.057905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:49.391 [2024-10-15 09:17:33.058035] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:49.391 [2024-10-15 09:17:33.058067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:49.391 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.391 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:49.391 09:17:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:49.391 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:49.391 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.391 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.391 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.391 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.391 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.391 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.391 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.391 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.391 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.392 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.392 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.392 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.392 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.392 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.392 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.392 "name": 
"Existed_Raid", 00:17:49.392 "uuid": "d9b74009-b5be-4ef4-b393-d17555d18618", 00:17:49.392 "strip_size_kb": 0, 00:17:49.392 "state": "configuring", 00:17:49.392 "raid_level": "raid1", 00:17:49.392 "superblock": true, 00:17:49.392 "num_base_bdevs": 4, 00:17:49.392 "num_base_bdevs_discovered": 1, 00:17:49.392 "num_base_bdevs_operational": 4, 00:17:49.392 "base_bdevs_list": [ 00:17:49.392 { 00:17:49.392 "name": "BaseBdev1", 00:17:49.392 "uuid": "0c84096c-5477-4df2-8772-2d41246498c7", 00:17:49.392 "is_configured": true, 00:17:49.392 "data_offset": 2048, 00:17:49.392 "data_size": 63488 00:17:49.392 }, 00:17:49.392 { 00:17:49.392 "name": "BaseBdev2", 00:17:49.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.392 "is_configured": false, 00:17:49.392 "data_offset": 0, 00:17:49.392 "data_size": 0 00:17:49.392 }, 00:17:49.392 { 00:17:49.392 "name": "BaseBdev3", 00:17:49.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.392 "is_configured": false, 00:17:49.392 "data_offset": 0, 00:17:49.392 "data_size": 0 00:17:49.392 }, 00:17:49.392 { 00:17:49.392 "name": "BaseBdev4", 00:17:49.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.392 "is_configured": false, 00:17:49.392 "data_offset": 0, 00:17:49.392 "data_size": 0 00:17:49.392 } 00:17:49.392 ] 00:17:49.392 }' 00:17:49.392 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.392 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.957 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:49.957 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.957 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.957 [2024-10-15 09:17:33.620877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:49.957 
BaseBdev2 00:17:49.957 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.957 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:49.957 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.958 [ 00:17:49.958 { 00:17:49.958 "name": "BaseBdev2", 00:17:49.958 "aliases": [ 00:17:49.958 "71ec17b4-e1f2-40e3-ae88-4cb0973965af" 00:17:49.958 ], 00:17:49.958 "product_name": "Malloc disk", 00:17:49.958 "block_size": 512, 00:17:49.958 "num_blocks": 65536, 00:17:49.958 "uuid": "71ec17b4-e1f2-40e3-ae88-4cb0973965af", 00:17:49.958 "assigned_rate_limits": { 
00:17:49.958 "rw_ios_per_sec": 0, 00:17:49.958 "rw_mbytes_per_sec": 0, 00:17:49.958 "r_mbytes_per_sec": 0, 00:17:49.958 "w_mbytes_per_sec": 0 00:17:49.958 }, 00:17:49.958 "claimed": true, 00:17:49.958 "claim_type": "exclusive_write", 00:17:49.958 "zoned": false, 00:17:49.958 "supported_io_types": { 00:17:49.958 "read": true, 00:17:49.958 "write": true, 00:17:49.958 "unmap": true, 00:17:49.958 "flush": true, 00:17:49.958 "reset": true, 00:17:49.958 "nvme_admin": false, 00:17:49.958 "nvme_io": false, 00:17:49.958 "nvme_io_md": false, 00:17:49.958 "write_zeroes": true, 00:17:49.958 "zcopy": true, 00:17:49.958 "get_zone_info": false, 00:17:49.958 "zone_management": false, 00:17:49.958 "zone_append": false, 00:17:49.958 "compare": false, 00:17:49.958 "compare_and_write": false, 00:17:49.958 "abort": true, 00:17:49.958 "seek_hole": false, 00:17:49.958 "seek_data": false, 00:17:49.958 "copy": true, 00:17:49.958 "nvme_iov_md": false 00:17:49.958 }, 00:17:49.958 "memory_domains": [ 00:17:49.958 { 00:17:49.958 "dma_device_id": "system", 00:17:49.958 "dma_device_type": 1 00:17:49.958 }, 00:17:49.958 { 00:17:49.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.958 "dma_device_type": 2 00:17:49.958 } 00:17:49.958 ], 00:17:49.958 "driver_specific": {} 00:17:49.958 } 00:17:49.958 ] 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.958 "name": "Existed_Raid", 00:17:49.958 "uuid": "d9b74009-b5be-4ef4-b393-d17555d18618", 00:17:49.958 "strip_size_kb": 0, 00:17:49.958 "state": "configuring", 00:17:49.958 "raid_level": "raid1", 00:17:49.958 "superblock": true, 00:17:49.958 "num_base_bdevs": 4, 00:17:49.958 "num_base_bdevs_discovered": 2, 00:17:49.958 "num_base_bdevs_operational": 4, 00:17:49.958 
"base_bdevs_list": [ 00:17:49.958 { 00:17:49.958 "name": "BaseBdev1", 00:17:49.958 "uuid": "0c84096c-5477-4df2-8772-2d41246498c7", 00:17:49.958 "is_configured": true, 00:17:49.958 "data_offset": 2048, 00:17:49.958 "data_size": 63488 00:17:49.958 }, 00:17:49.958 { 00:17:49.958 "name": "BaseBdev2", 00:17:49.958 "uuid": "71ec17b4-e1f2-40e3-ae88-4cb0973965af", 00:17:49.958 "is_configured": true, 00:17:49.958 "data_offset": 2048, 00:17:49.958 "data_size": 63488 00:17:49.958 }, 00:17:49.958 { 00:17:49.958 "name": "BaseBdev3", 00:17:49.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.958 "is_configured": false, 00:17:49.958 "data_offset": 0, 00:17:49.958 "data_size": 0 00:17:49.958 }, 00:17:49.958 { 00:17:49.958 "name": "BaseBdev4", 00:17:49.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.958 "is_configured": false, 00:17:49.958 "data_offset": 0, 00:17:49.958 "data_size": 0 00:17:49.958 } 00:17:49.958 ] 00:17:49.958 }' 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.958 09:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.524 [2024-10-15 09:17:34.261167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:50.524 BaseBdev3 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.524 [ 00:17:50.524 { 00:17:50.524 "name": "BaseBdev3", 00:17:50.524 "aliases": [ 00:17:50.524 "c3b92389-6c93-460f-944a-6f40e8b02bf4" 00:17:50.524 ], 00:17:50.524 "product_name": "Malloc disk", 00:17:50.524 "block_size": 512, 00:17:50.524 "num_blocks": 65536, 00:17:50.524 "uuid": "c3b92389-6c93-460f-944a-6f40e8b02bf4", 00:17:50.524 "assigned_rate_limits": { 00:17:50.524 "rw_ios_per_sec": 0, 00:17:50.524 "rw_mbytes_per_sec": 0, 00:17:50.524 "r_mbytes_per_sec": 0, 00:17:50.524 "w_mbytes_per_sec": 0 00:17:50.524 }, 00:17:50.524 "claimed": true, 00:17:50.524 "claim_type": "exclusive_write", 00:17:50.524 "zoned": false, 00:17:50.524 "supported_io_types": { 00:17:50.524 "read": true, 00:17:50.524 
"write": true, 00:17:50.524 "unmap": true, 00:17:50.524 "flush": true, 00:17:50.524 "reset": true, 00:17:50.524 "nvme_admin": false, 00:17:50.524 "nvme_io": false, 00:17:50.524 "nvme_io_md": false, 00:17:50.524 "write_zeroes": true, 00:17:50.524 "zcopy": true, 00:17:50.524 "get_zone_info": false, 00:17:50.524 "zone_management": false, 00:17:50.524 "zone_append": false, 00:17:50.524 "compare": false, 00:17:50.524 "compare_and_write": false, 00:17:50.524 "abort": true, 00:17:50.524 "seek_hole": false, 00:17:50.524 "seek_data": false, 00:17:50.524 "copy": true, 00:17:50.524 "nvme_iov_md": false 00:17:50.524 }, 00:17:50.524 "memory_domains": [ 00:17:50.524 { 00:17:50.524 "dma_device_id": "system", 00:17:50.524 "dma_device_type": 1 00:17:50.524 }, 00:17:50.524 { 00:17:50.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.524 "dma_device_type": 2 00:17:50.524 } 00:17:50.524 ], 00:17:50.524 "driver_specific": {} 00:17:50.524 } 00:17:50.524 ] 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.524 "name": "Existed_Raid", 00:17:50.524 "uuid": "d9b74009-b5be-4ef4-b393-d17555d18618", 00:17:50.524 "strip_size_kb": 0, 00:17:50.524 "state": "configuring", 00:17:50.524 "raid_level": "raid1", 00:17:50.524 "superblock": true, 00:17:50.524 "num_base_bdevs": 4, 00:17:50.524 "num_base_bdevs_discovered": 3, 00:17:50.524 "num_base_bdevs_operational": 4, 00:17:50.524 "base_bdevs_list": [ 00:17:50.524 { 00:17:50.524 "name": "BaseBdev1", 00:17:50.524 "uuid": "0c84096c-5477-4df2-8772-2d41246498c7", 00:17:50.524 "is_configured": true, 00:17:50.524 "data_offset": 2048, 00:17:50.524 "data_size": 63488 00:17:50.524 }, 00:17:50.524 { 00:17:50.524 "name": "BaseBdev2", 00:17:50.524 "uuid": 
"71ec17b4-e1f2-40e3-ae88-4cb0973965af", 00:17:50.524 "is_configured": true, 00:17:50.524 "data_offset": 2048, 00:17:50.524 "data_size": 63488 00:17:50.524 }, 00:17:50.524 { 00:17:50.524 "name": "BaseBdev3", 00:17:50.524 "uuid": "c3b92389-6c93-460f-944a-6f40e8b02bf4", 00:17:50.524 "is_configured": true, 00:17:50.524 "data_offset": 2048, 00:17:50.524 "data_size": 63488 00:17:50.524 }, 00:17:50.524 { 00:17:50.524 "name": "BaseBdev4", 00:17:50.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.524 "is_configured": false, 00:17:50.524 "data_offset": 0, 00:17:50.524 "data_size": 0 00:17:50.524 } 00:17:50.524 ] 00:17:50.524 }' 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.524 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.093 [2024-10-15 09:17:34.856949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:51.093 [2024-10-15 09:17:34.857567] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:51.093 [2024-10-15 09:17:34.857594] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:51.093 BaseBdev4 00:17:51.093 [2024-10-15 09:17:34.857973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:51.093 [2024-10-15 09:17:34.858228] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:51.093 [2024-10-15 09:17:34.858262] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:17:51.093 [2024-10-15 09:17:34.858454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.093 [ 00:17:51.093 { 00:17:51.093 "name": "BaseBdev4", 00:17:51.093 "aliases": [ 00:17:51.093 "6e794d13-f79b-4987-b5ee-c9a125f365a1" 00:17:51.093 ], 00:17:51.093 "product_name": "Malloc disk", 00:17:51.093 "block_size": 512, 00:17:51.093 
"num_blocks": 65536, 00:17:51.093 "uuid": "6e794d13-f79b-4987-b5ee-c9a125f365a1", 00:17:51.093 "assigned_rate_limits": { 00:17:51.093 "rw_ios_per_sec": 0, 00:17:51.093 "rw_mbytes_per_sec": 0, 00:17:51.093 "r_mbytes_per_sec": 0, 00:17:51.093 "w_mbytes_per_sec": 0 00:17:51.093 }, 00:17:51.093 "claimed": true, 00:17:51.093 "claim_type": "exclusive_write", 00:17:51.093 "zoned": false, 00:17:51.093 "supported_io_types": { 00:17:51.093 "read": true, 00:17:51.093 "write": true, 00:17:51.093 "unmap": true, 00:17:51.093 "flush": true, 00:17:51.093 "reset": true, 00:17:51.093 "nvme_admin": false, 00:17:51.093 "nvme_io": false, 00:17:51.093 "nvme_io_md": false, 00:17:51.093 "write_zeroes": true, 00:17:51.093 "zcopy": true, 00:17:51.093 "get_zone_info": false, 00:17:51.093 "zone_management": false, 00:17:51.093 "zone_append": false, 00:17:51.093 "compare": false, 00:17:51.093 "compare_and_write": false, 00:17:51.093 "abort": true, 00:17:51.093 "seek_hole": false, 00:17:51.093 "seek_data": false, 00:17:51.093 "copy": true, 00:17:51.093 "nvme_iov_md": false 00:17:51.093 }, 00:17:51.093 "memory_domains": [ 00:17:51.093 { 00:17:51.093 "dma_device_id": "system", 00:17:51.093 "dma_device_type": 1 00:17:51.093 }, 00:17:51.093 { 00:17:51.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.093 "dma_device_type": 2 00:17:51.093 } 00:17:51.093 ], 00:17:51.093 "driver_specific": {} 00:17:51.093 } 00:17:51.093 ] 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.093 "name": "Existed_Raid", 00:17:51.093 "uuid": "d9b74009-b5be-4ef4-b393-d17555d18618", 00:17:51.093 "strip_size_kb": 0, 00:17:51.093 "state": "online", 00:17:51.093 "raid_level": "raid1", 00:17:51.093 "superblock": true, 00:17:51.093 "num_base_bdevs": 4, 
00:17:51.093 "num_base_bdevs_discovered": 4, 00:17:51.093 "num_base_bdevs_operational": 4, 00:17:51.093 "base_bdevs_list": [ 00:17:51.093 { 00:17:51.093 "name": "BaseBdev1", 00:17:51.093 "uuid": "0c84096c-5477-4df2-8772-2d41246498c7", 00:17:51.093 "is_configured": true, 00:17:51.093 "data_offset": 2048, 00:17:51.093 "data_size": 63488 00:17:51.093 }, 00:17:51.093 { 00:17:51.093 "name": "BaseBdev2", 00:17:51.093 "uuid": "71ec17b4-e1f2-40e3-ae88-4cb0973965af", 00:17:51.093 "is_configured": true, 00:17:51.093 "data_offset": 2048, 00:17:51.093 "data_size": 63488 00:17:51.093 }, 00:17:51.093 { 00:17:51.093 "name": "BaseBdev3", 00:17:51.093 "uuid": "c3b92389-6c93-460f-944a-6f40e8b02bf4", 00:17:51.093 "is_configured": true, 00:17:51.093 "data_offset": 2048, 00:17:51.093 "data_size": 63488 00:17:51.093 }, 00:17:51.093 { 00:17:51.093 "name": "BaseBdev4", 00:17:51.093 "uuid": "6e794d13-f79b-4987-b5ee-c9a125f365a1", 00:17:51.093 "is_configured": true, 00:17:51.093 "data_offset": 2048, 00:17:51.093 "data_size": 63488 00:17:51.093 } 00:17:51.093 ] 00:17:51.093 }' 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.093 09:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.661 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:51.661 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:51.661 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:51.661 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:51.661 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:51.661 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:51.661 
09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:51.661 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:51.661 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.661 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.661 [2024-10-15 09:17:35.417647] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:51.661 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.661 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:51.661 "name": "Existed_Raid", 00:17:51.661 "aliases": [ 00:17:51.661 "d9b74009-b5be-4ef4-b393-d17555d18618" 00:17:51.661 ], 00:17:51.661 "product_name": "Raid Volume", 00:17:51.661 "block_size": 512, 00:17:51.661 "num_blocks": 63488, 00:17:51.661 "uuid": "d9b74009-b5be-4ef4-b393-d17555d18618", 00:17:51.661 "assigned_rate_limits": { 00:17:51.661 "rw_ios_per_sec": 0, 00:17:51.661 "rw_mbytes_per_sec": 0, 00:17:51.661 "r_mbytes_per_sec": 0, 00:17:51.661 "w_mbytes_per_sec": 0 00:17:51.661 }, 00:17:51.661 "claimed": false, 00:17:51.661 "zoned": false, 00:17:51.661 "supported_io_types": { 00:17:51.661 "read": true, 00:17:51.661 "write": true, 00:17:51.661 "unmap": false, 00:17:51.661 "flush": false, 00:17:51.661 "reset": true, 00:17:51.661 "nvme_admin": false, 00:17:51.661 "nvme_io": false, 00:17:51.661 "nvme_io_md": false, 00:17:51.661 "write_zeroes": true, 00:17:51.661 "zcopy": false, 00:17:51.661 "get_zone_info": false, 00:17:51.661 "zone_management": false, 00:17:51.661 "zone_append": false, 00:17:51.661 "compare": false, 00:17:51.661 "compare_and_write": false, 00:17:51.661 "abort": false, 00:17:51.661 "seek_hole": false, 00:17:51.661 "seek_data": false, 00:17:51.661 "copy": false, 00:17:51.661 
"nvme_iov_md": false 00:17:51.661 }, 00:17:51.661 "memory_domains": [ 00:17:51.661 { 00:17:51.661 "dma_device_id": "system", 00:17:51.661 "dma_device_type": 1 00:17:51.661 }, 00:17:51.661 { 00:17:51.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.661 "dma_device_type": 2 00:17:51.661 }, 00:17:51.661 { 00:17:51.661 "dma_device_id": "system", 00:17:51.661 "dma_device_type": 1 00:17:51.661 }, 00:17:51.661 { 00:17:51.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.661 "dma_device_type": 2 00:17:51.661 }, 00:17:51.661 { 00:17:51.661 "dma_device_id": "system", 00:17:51.661 "dma_device_type": 1 00:17:51.661 }, 00:17:51.661 { 00:17:51.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.661 "dma_device_type": 2 00:17:51.661 }, 00:17:51.661 { 00:17:51.661 "dma_device_id": "system", 00:17:51.661 "dma_device_type": 1 00:17:51.661 }, 00:17:51.661 { 00:17:51.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.661 "dma_device_type": 2 00:17:51.661 } 00:17:51.661 ], 00:17:51.661 "driver_specific": { 00:17:51.661 "raid": { 00:17:51.661 "uuid": "d9b74009-b5be-4ef4-b393-d17555d18618", 00:17:51.661 "strip_size_kb": 0, 00:17:51.661 "state": "online", 00:17:51.661 "raid_level": "raid1", 00:17:51.661 "superblock": true, 00:17:51.661 "num_base_bdevs": 4, 00:17:51.661 "num_base_bdevs_discovered": 4, 00:17:51.661 "num_base_bdevs_operational": 4, 00:17:51.661 "base_bdevs_list": [ 00:17:51.661 { 00:17:51.661 "name": "BaseBdev1", 00:17:51.661 "uuid": "0c84096c-5477-4df2-8772-2d41246498c7", 00:17:51.661 "is_configured": true, 00:17:51.661 "data_offset": 2048, 00:17:51.661 "data_size": 63488 00:17:51.661 }, 00:17:51.661 { 00:17:51.661 "name": "BaseBdev2", 00:17:51.661 "uuid": "71ec17b4-e1f2-40e3-ae88-4cb0973965af", 00:17:51.661 "is_configured": true, 00:17:51.661 "data_offset": 2048, 00:17:51.661 "data_size": 63488 00:17:51.661 }, 00:17:51.661 { 00:17:51.661 "name": "BaseBdev3", 00:17:51.661 "uuid": "c3b92389-6c93-460f-944a-6f40e8b02bf4", 00:17:51.661 "is_configured": true, 
00:17:51.661 "data_offset": 2048, 00:17:51.661 "data_size": 63488 00:17:51.661 }, 00:17:51.661 { 00:17:51.661 "name": "BaseBdev4", 00:17:51.661 "uuid": "6e794d13-f79b-4987-b5ee-c9a125f365a1", 00:17:51.661 "is_configured": true, 00:17:51.661 "data_offset": 2048, 00:17:51.661 "data_size": 63488 00:17:51.661 } 00:17:51.661 ] 00:17:51.661 } 00:17:51.661 } 00:17:51.661 }' 00:17:51.661 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:51.661 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:51.661 BaseBdev2 00:17:51.661 BaseBdev3 00:17:51.661 BaseBdev4' 00:17:51.661 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.661 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:51.661 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:51.661 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:51.661 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.661 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.661 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:51.921 09:17:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.921 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.921 [2024-10-15 09:17:35.813403] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:52.181 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.181 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:52.181 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:52.181 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:52.181 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:52.181 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:52.181 09:17:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:52.181 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.181 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.181 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.181 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.181 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:52.181 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.181 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.181 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.181 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.181 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.181 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.182 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.182 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.182 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.182 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.182 "name": "Existed_Raid", 00:17:52.182 "uuid": "d9b74009-b5be-4ef4-b393-d17555d18618", 00:17:52.182 "strip_size_kb": 0, 00:17:52.182 
"state": "online", 00:17:52.182 "raid_level": "raid1", 00:17:52.182 "superblock": true, 00:17:52.182 "num_base_bdevs": 4, 00:17:52.182 "num_base_bdevs_discovered": 3, 00:17:52.182 "num_base_bdevs_operational": 3, 00:17:52.182 "base_bdevs_list": [ 00:17:52.182 { 00:17:52.182 "name": null, 00:17:52.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.182 "is_configured": false, 00:17:52.182 "data_offset": 0, 00:17:52.182 "data_size": 63488 00:17:52.182 }, 00:17:52.182 { 00:17:52.182 "name": "BaseBdev2", 00:17:52.182 "uuid": "71ec17b4-e1f2-40e3-ae88-4cb0973965af", 00:17:52.182 "is_configured": true, 00:17:52.182 "data_offset": 2048, 00:17:52.182 "data_size": 63488 00:17:52.182 }, 00:17:52.182 { 00:17:52.182 "name": "BaseBdev3", 00:17:52.182 "uuid": "c3b92389-6c93-460f-944a-6f40e8b02bf4", 00:17:52.182 "is_configured": true, 00:17:52.182 "data_offset": 2048, 00:17:52.182 "data_size": 63488 00:17:52.182 }, 00:17:52.182 { 00:17:52.182 "name": "BaseBdev4", 00:17:52.182 "uuid": "6e794d13-f79b-4987-b5ee-c9a125f365a1", 00:17:52.182 "is_configured": true, 00:17:52.182 "data_offset": 2048, 00:17:52.182 "data_size": 63488 00:17:52.182 } 00:17:52.182 ] 00:17:52.182 }' 00:17:52.182 09:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.182 09:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.749 09:17:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.749 [2024-10-15 09:17:36.485446] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.749 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.749 [2024-10-15 09:17:36.639578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:53.008 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.008 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:53.008 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:53.008 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.008 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:53.008 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.008 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.008 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.008 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:53.008 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:53.008 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:53.008 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.008 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.008 [2024-10-15 09:17:36.793342] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:53.008 [2024-10-15 09:17:36.793526] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:53.008 [2024-10-15 09:17:36.888910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.008 [2024-10-15 09:17:36.889321] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.008 [2024-10-15 09:17:36.889485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:53.008 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.008 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:53.008 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:53.008 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.008 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:53.008 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.008 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.008 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.267 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:53.267 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:53.267 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:53.267 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:53.267 09:17:36 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:53.268 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:53.268 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.268 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.268 BaseBdev2 00:17:53.268 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.268 09:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:53.268 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:53.268 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:53.268 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:53.268 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:53.268 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:53.268 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:53.268 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.268 09:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:17:53.268 [ 00:17:53.268 { 00:17:53.268 "name": "BaseBdev2", 00:17:53.268 "aliases": [ 00:17:53.268 "3a36753d-4d26-4dc9-97e4-0550d1dbea76" 00:17:53.268 ], 00:17:53.268 "product_name": "Malloc disk", 00:17:53.268 "block_size": 512, 00:17:53.268 "num_blocks": 65536, 00:17:53.268 "uuid": "3a36753d-4d26-4dc9-97e4-0550d1dbea76", 00:17:53.268 "assigned_rate_limits": { 00:17:53.268 "rw_ios_per_sec": 0, 00:17:53.268 "rw_mbytes_per_sec": 0, 00:17:53.268 "r_mbytes_per_sec": 0, 00:17:53.268 "w_mbytes_per_sec": 0 00:17:53.268 }, 00:17:53.268 "claimed": false, 00:17:53.268 "zoned": false, 00:17:53.268 "supported_io_types": { 00:17:53.268 "read": true, 00:17:53.268 "write": true, 00:17:53.268 "unmap": true, 00:17:53.268 "flush": true, 00:17:53.268 "reset": true, 00:17:53.268 "nvme_admin": false, 00:17:53.268 "nvme_io": false, 00:17:53.268 "nvme_io_md": false, 00:17:53.268 "write_zeroes": true, 00:17:53.268 "zcopy": true, 00:17:53.268 "get_zone_info": false, 00:17:53.268 "zone_management": false, 00:17:53.268 "zone_append": false, 00:17:53.268 "compare": false, 00:17:53.268 "compare_and_write": false, 00:17:53.268 "abort": true, 00:17:53.268 "seek_hole": false, 00:17:53.268 "seek_data": false, 00:17:53.268 "copy": true, 00:17:53.268 "nvme_iov_md": false 00:17:53.268 }, 00:17:53.268 "memory_domains": [ 00:17:53.268 { 00:17:53.268 "dma_device_id": "system", 00:17:53.268 "dma_device_type": 1 00:17:53.268 }, 00:17:53.268 { 00:17:53.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.268 "dma_device_type": 2 00:17:53.268 } 00:17:53.268 ], 00:17:53.268 "driver_specific": {} 00:17:53.268 } 00:17:53.268 ] 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:53.268 09:17:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.268 BaseBdev3 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.268 09:17:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.268 [ 00:17:53.268 { 00:17:53.268 "name": "BaseBdev3", 00:17:53.268 "aliases": [ 00:17:53.268 "13b252e7-344e-4ae5-85c8-abd31b1bf1fa" 00:17:53.268 ], 00:17:53.268 "product_name": "Malloc disk", 00:17:53.268 "block_size": 512, 00:17:53.268 "num_blocks": 65536, 00:17:53.268 "uuid": "13b252e7-344e-4ae5-85c8-abd31b1bf1fa", 00:17:53.268 "assigned_rate_limits": { 00:17:53.268 "rw_ios_per_sec": 0, 00:17:53.268 "rw_mbytes_per_sec": 0, 00:17:53.268 "r_mbytes_per_sec": 0, 00:17:53.268 "w_mbytes_per_sec": 0 00:17:53.268 }, 00:17:53.268 "claimed": false, 00:17:53.268 "zoned": false, 00:17:53.268 "supported_io_types": { 00:17:53.268 "read": true, 00:17:53.268 "write": true, 00:17:53.268 "unmap": true, 00:17:53.268 "flush": true, 00:17:53.268 "reset": true, 00:17:53.268 "nvme_admin": false, 00:17:53.268 "nvme_io": false, 00:17:53.268 "nvme_io_md": false, 00:17:53.268 "write_zeroes": true, 00:17:53.268 "zcopy": true, 00:17:53.268 "get_zone_info": false, 00:17:53.268 "zone_management": false, 00:17:53.268 "zone_append": false, 00:17:53.268 "compare": false, 00:17:53.268 "compare_and_write": false, 00:17:53.268 "abort": true, 00:17:53.268 "seek_hole": false, 00:17:53.268 "seek_data": false, 00:17:53.268 "copy": true, 00:17:53.268 "nvme_iov_md": false 00:17:53.268 }, 00:17:53.268 "memory_domains": [ 00:17:53.268 { 00:17:53.268 "dma_device_id": "system", 00:17:53.268 "dma_device_type": 1 00:17:53.268 }, 00:17:53.268 { 00:17:53.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.268 "dma_device_type": 2 00:17:53.268 } 00:17:53.268 ], 00:17:53.268 "driver_specific": {} 00:17:53.268 } 00:17:53.268 ] 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.268 BaseBdev4 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.268 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.268 [ 00:17:53.268 { 00:17:53.268 "name": "BaseBdev4", 00:17:53.268 "aliases": [ 00:17:53.268 "97228591-7e73-4274-97df-2fd36fa7331b" 00:17:53.268 ], 00:17:53.268 "product_name": "Malloc disk", 00:17:53.268 "block_size": 512, 00:17:53.268 "num_blocks": 65536, 00:17:53.268 "uuid": "97228591-7e73-4274-97df-2fd36fa7331b", 00:17:53.268 "assigned_rate_limits": { 00:17:53.268 "rw_ios_per_sec": 0, 00:17:53.268 "rw_mbytes_per_sec": 0, 00:17:53.268 "r_mbytes_per_sec": 0, 00:17:53.268 "w_mbytes_per_sec": 0 00:17:53.268 }, 00:17:53.268 "claimed": false, 00:17:53.268 "zoned": false, 00:17:53.268 "supported_io_types": { 00:17:53.268 "read": true, 00:17:53.268 "write": true, 00:17:53.268 "unmap": true, 00:17:53.268 "flush": true, 00:17:53.268 "reset": true, 00:17:53.268 "nvme_admin": false, 00:17:53.268 "nvme_io": false, 00:17:53.268 "nvme_io_md": false, 00:17:53.268 "write_zeroes": true, 00:17:53.268 "zcopy": true, 00:17:53.268 "get_zone_info": false, 00:17:53.268 "zone_management": false, 00:17:53.268 "zone_append": false, 00:17:53.268 "compare": false, 00:17:53.268 "compare_and_write": false, 00:17:53.268 "abort": true, 00:17:53.268 "seek_hole": false, 00:17:53.268 "seek_data": false, 00:17:53.268 "copy": true, 00:17:53.268 "nvme_iov_md": false 00:17:53.268 }, 00:17:53.268 "memory_domains": [ 00:17:53.268 { 00:17:53.269 "dma_device_id": "system", 00:17:53.269 "dma_device_type": 1 00:17:53.269 }, 00:17:53.269 { 00:17:53.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.269 "dma_device_type": 2 00:17:53.269 } 00:17:53.269 ], 00:17:53.269 "driver_specific": {} 00:17:53.269 } 00:17:53.269 ] 00:17:53.269 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.269 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:17:53.269 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:53.269 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:53.269 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:53.269 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.269 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.269 [2024-10-15 09:17:37.193833] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:53.528 [2024-10-15 09:17:37.194108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:53.528 [2024-10-15 09:17:37.194170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:53.528 [2024-10-15 09:17:37.196791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:53.528 [2024-10-15 09:17:37.196859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:53.528 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.528 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:53.528 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.528 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:53.528 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.528 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:17:53.528 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:53.528 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.528 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.528 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.528 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.528 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.528 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.528 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.528 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.528 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.528 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.528 "name": "Existed_Raid", 00:17:53.528 "uuid": "93527e97-934e-46da-9282-178253229cd7", 00:17:53.528 "strip_size_kb": 0, 00:17:53.528 "state": "configuring", 00:17:53.528 "raid_level": "raid1", 00:17:53.528 "superblock": true, 00:17:53.528 "num_base_bdevs": 4, 00:17:53.528 "num_base_bdevs_discovered": 3, 00:17:53.528 "num_base_bdevs_operational": 4, 00:17:53.528 "base_bdevs_list": [ 00:17:53.528 { 00:17:53.528 "name": "BaseBdev1", 00:17:53.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.528 "is_configured": false, 00:17:53.528 "data_offset": 0, 00:17:53.528 "data_size": 0 00:17:53.528 }, 00:17:53.528 { 00:17:53.528 "name": "BaseBdev2", 00:17:53.528 "uuid": "3a36753d-4d26-4dc9-97e4-0550d1dbea76", 
00:17:53.528 "is_configured": true, 00:17:53.528 "data_offset": 2048, 00:17:53.528 "data_size": 63488 00:17:53.528 }, 00:17:53.528 { 00:17:53.528 "name": "BaseBdev3", 00:17:53.528 "uuid": "13b252e7-344e-4ae5-85c8-abd31b1bf1fa", 00:17:53.528 "is_configured": true, 00:17:53.528 "data_offset": 2048, 00:17:53.528 "data_size": 63488 00:17:53.528 }, 00:17:53.528 { 00:17:53.528 "name": "BaseBdev4", 00:17:53.528 "uuid": "97228591-7e73-4274-97df-2fd36fa7331b", 00:17:53.528 "is_configured": true, 00:17:53.528 "data_offset": 2048, 00:17:53.528 "data_size": 63488 00:17:53.528 } 00:17:53.528 ] 00:17:53.528 }' 00:17:53.528 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.528 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.108 [2024-10-15 09:17:37.726046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.108 "name": "Existed_Raid", 00:17:54.108 "uuid": "93527e97-934e-46da-9282-178253229cd7", 00:17:54.108 "strip_size_kb": 0, 00:17:54.108 "state": "configuring", 00:17:54.108 "raid_level": "raid1", 00:17:54.108 "superblock": true, 00:17:54.108 "num_base_bdevs": 4, 00:17:54.108 "num_base_bdevs_discovered": 2, 00:17:54.108 "num_base_bdevs_operational": 4, 00:17:54.108 "base_bdevs_list": [ 00:17:54.108 { 00:17:54.108 "name": "BaseBdev1", 00:17:54.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.108 "is_configured": false, 00:17:54.108 "data_offset": 0, 00:17:54.108 "data_size": 0 00:17:54.108 }, 00:17:54.108 { 00:17:54.108 "name": null, 00:17:54.108 "uuid": "3a36753d-4d26-4dc9-97e4-0550d1dbea76", 00:17:54.108 
"is_configured": false, 00:17:54.108 "data_offset": 0, 00:17:54.108 "data_size": 63488 00:17:54.108 }, 00:17:54.108 { 00:17:54.108 "name": "BaseBdev3", 00:17:54.108 "uuid": "13b252e7-344e-4ae5-85c8-abd31b1bf1fa", 00:17:54.108 "is_configured": true, 00:17:54.108 "data_offset": 2048, 00:17:54.108 "data_size": 63488 00:17:54.108 }, 00:17:54.108 { 00:17:54.108 "name": "BaseBdev4", 00:17:54.108 "uuid": "97228591-7e73-4274-97df-2fd36fa7331b", 00:17:54.108 "is_configured": true, 00:17:54.108 "data_offset": 2048, 00:17:54.108 "data_size": 63488 00:17:54.108 } 00:17:54.108 ] 00:17:54.108 }' 00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.108 09:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.376 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.376 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:54.376 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.376 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.376 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.635 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:54.635 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:54.635 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.635 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.635 [2024-10-15 09:17:38.363852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:54.635 BaseBdev1 
00:17:54.635 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.635 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:54.635 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:54.635 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:54.635 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:54.635 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:54.635 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:54.635 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:54.635 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.635 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.635 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.635 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:54.635 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.635 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.635 [ 00:17:54.635 { 00:17:54.635 "name": "BaseBdev1", 00:17:54.636 "aliases": [ 00:17:54.636 "215b1d6f-54d6-44c2-bda3-0a14d2b08e60" 00:17:54.636 ], 00:17:54.636 "product_name": "Malloc disk", 00:17:54.636 "block_size": 512, 00:17:54.636 "num_blocks": 65536, 00:17:54.636 "uuid": "215b1d6f-54d6-44c2-bda3-0a14d2b08e60", 00:17:54.636 "assigned_rate_limits": { 00:17:54.636 
"rw_ios_per_sec": 0, 00:17:54.636 "rw_mbytes_per_sec": 0, 00:17:54.636 "r_mbytes_per_sec": 0, 00:17:54.636 "w_mbytes_per_sec": 0 00:17:54.636 }, 00:17:54.636 "claimed": true, 00:17:54.636 "claim_type": "exclusive_write", 00:17:54.636 "zoned": false, 00:17:54.636 "supported_io_types": { 00:17:54.636 "read": true, 00:17:54.636 "write": true, 00:17:54.636 "unmap": true, 00:17:54.636 "flush": true, 00:17:54.636 "reset": true, 00:17:54.636 "nvme_admin": false, 00:17:54.636 "nvme_io": false, 00:17:54.636 "nvme_io_md": false, 00:17:54.636 "write_zeroes": true, 00:17:54.636 "zcopy": true, 00:17:54.636 "get_zone_info": false, 00:17:54.636 "zone_management": false, 00:17:54.636 "zone_append": false, 00:17:54.636 "compare": false, 00:17:54.636 "compare_and_write": false, 00:17:54.636 "abort": true, 00:17:54.636 "seek_hole": false, 00:17:54.636 "seek_data": false, 00:17:54.636 "copy": true, 00:17:54.636 "nvme_iov_md": false 00:17:54.636 }, 00:17:54.636 "memory_domains": [ 00:17:54.636 { 00:17:54.636 "dma_device_id": "system", 00:17:54.636 "dma_device_type": 1 00:17:54.636 }, 00:17:54.636 { 00:17:54.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.636 "dma_device_type": 2 00:17:54.636 } 00:17:54.636 ], 00:17:54.636 "driver_specific": {} 00:17:54.636 } 00:17:54.636 ] 00:17:54.636 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.636 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:54.636 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:54.636 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.636 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:54.636 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:17:54.636 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.636 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:54.636 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.636 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.636 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.636 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.636 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.636 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.636 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.636 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.636 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.636 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.636 "name": "Existed_Raid", 00:17:54.636 "uuid": "93527e97-934e-46da-9282-178253229cd7", 00:17:54.636 "strip_size_kb": 0, 00:17:54.636 "state": "configuring", 00:17:54.636 "raid_level": "raid1", 00:17:54.636 "superblock": true, 00:17:54.636 "num_base_bdevs": 4, 00:17:54.636 "num_base_bdevs_discovered": 3, 00:17:54.636 "num_base_bdevs_operational": 4, 00:17:54.636 "base_bdevs_list": [ 00:17:54.636 { 00:17:54.636 "name": "BaseBdev1", 00:17:54.636 "uuid": "215b1d6f-54d6-44c2-bda3-0a14d2b08e60", 00:17:54.636 "is_configured": true, 00:17:54.636 "data_offset": 2048, 00:17:54.636 "data_size": 63488 
00:17:54.636 }, 00:17:54.636 { 00:17:54.636 "name": null, 00:17:54.636 "uuid": "3a36753d-4d26-4dc9-97e4-0550d1dbea76", 00:17:54.636 "is_configured": false, 00:17:54.636 "data_offset": 0, 00:17:54.636 "data_size": 63488 00:17:54.636 }, 00:17:54.636 { 00:17:54.636 "name": "BaseBdev3", 00:17:54.636 "uuid": "13b252e7-344e-4ae5-85c8-abd31b1bf1fa", 00:17:54.636 "is_configured": true, 00:17:54.636 "data_offset": 2048, 00:17:54.636 "data_size": 63488 00:17:54.636 }, 00:17:54.636 { 00:17:54.636 "name": "BaseBdev4", 00:17:54.636 "uuid": "97228591-7e73-4274-97df-2fd36fa7331b", 00:17:54.636 "is_configured": true, 00:17:54.636 "data_offset": 2048, 00:17:54.636 "data_size": 63488 00:17:54.636 } 00:17:54.636 ] 00:17:54.636 }' 00:17:54.636 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.636 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.200 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.200 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:55.200 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.200 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.200 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.200 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:55.200 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:55.200 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.200 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.200 
[2024-10-15 09:17:38.992179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:55.200 09:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.200 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:55.200 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.200 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.200 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.200 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.200 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:55.200 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.200 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.200 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.200 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.200 09:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.200 09:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.200 09:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.200 09:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.200 09:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.200 09:17:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.200 "name": "Existed_Raid", 00:17:55.200 "uuid": "93527e97-934e-46da-9282-178253229cd7", 00:17:55.200 "strip_size_kb": 0, 00:17:55.200 "state": "configuring", 00:17:55.200 "raid_level": "raid1", 00:17:55.200 "superblock": true, 00:17:55.200 "num_base_bdevs": 4, 00:17:55.200 "num_base_bdevs_discovered": 2, 00:17:55.200 "num_base_bdevs_operational": 4, 00:17:55.200 "base_bdevs_list": [ 00:17:55.200 { 00:17:55.200 "name": "BaseBdev1", 00:17:55.200 "uuid": "215b1d6f-54d6-44c2-bda3-0a14d2b08e60", 00:17:55.200 "is_configured": true, 00:17:55.200 "data_offset": 2048, 00:17:55.200 "data_size": 63488 00:17:55.200 }, 00:17:55.200 { 00:17:55.200 "name": null, 00:17:55.200 "uuid": "3a36753d-4d26-4dc9-97e4-0550d1dbea76", 00:17:55.200 "is_configured": false, 00:17:55.200 "data_offset": 0, 00:17:55.200 "data_size": 63488 00:17:55.200 }, 00:17:55.200 { 00:17:55.200 "name": null, 00:17:55.200 "uuid": "13b252e7-344e-4ae5-85c8-abd31b1bf1fa", 00:17:55.200 "is_configured": false, 00:17:55.200 "data_offset": 0, 00:17:55.200 "data_size": 63488 00:17:55.200 }, 00:17:55.200 { 00:17:55.200 "name": "BaseBdev4", 00:17:55.200 "uuid": "97228591-7e73-4274-97df-2fd36fa7331b", 00:17:55.200 "is_configured": true, 00:17:55.200 "data_offset": 2048, 00:17:55.200 "data_size": 63488 00:17:55.200 } 00:17:55.200 ] 00:17:55.200 }' 00:17:55.200 09:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.200 09:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:55.765 
09:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.765 [2024-10-15 09:17:39.572319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.765 09:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.765 "name": "Existed_Raid", 00:17:55.765 "uuid": "93527e97-934e-46da-9282-178253229cd7", 00:17:55.765 "strip_size_kb": 0, 00:17:55.765 "state": "configuring", 00:17:55.765 "raid_level": "raid1", 00:17:55.765 "superblock": true, 00:17:55.765 "num_base_bdevs": 4, 00:17:55.765 "num_base_bdevs_discovered": 3, 00:17:55.765 "num_base_bdevs_operational": 4, 00:17:55.765 "base_bdevs_list": [ 00:17:55.765 { 00:17:55.765 "name": "BaseBdev1", 00:17:55.765 "uuid": "215b1d6f-54d6-44c2-bda3-0a14d2b08e60", 00:17:55.765 "is_configured": true, 00:17:55.765 "data_offset": 2048, 00:17:55.765 "data_size": 63488 00:17:55.765 }, 00:17:55.765 { 00:17:55.765 "name": null, 00:17:55.766 "uuid": "3a36753d-4d26-4dc9-97e4-0550d1dbea76", 00:17:55.766 "is_configured": false, 00:17:55.766 "data_offset": 0, 00:17:55.766 "data_size": 63488 00:17:55.766 }, 00:17:55.766 { 00:17:55.766 "name": "BaseBdev3", 00:17:55.766 "uuid": "13b252e7-344e-4ae5-85c8-abd31b1bf1fa", 00:17:55.766 "is_configured": true, 00:17:55.766 "data_offset": 2048, 00:17:55.766 "data_size": 63488 00:17:55.766 }, 00:17:55.766 { 00:17:55.766 "name": "BaseBdev4", 00:17:55.766 "uuid": 
"97228591-7e73-4274-97df-2fd36fa7331b", 00:17:55.766 "is_configured": true, 00:17:55.766 "data_offset": 2048, 00:17:55.766 "data_size": 63488 00:17:55.766 } 00:17:55.766 ] 00:17:55.766 }' 00:17:55.766 09:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.766 09:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.332 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.332 09:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.332 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:56.332 09:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.332 09:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.332 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:56.332 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:56.332 09:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.332 09:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.332 [2024-10-15 09:17:40.160507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:56.591 09:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.591 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:56.591 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.591 09:17:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:56.591 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.591 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.591 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:56.591 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.591 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.591 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.591 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.591 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.591 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.591 09:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.591 09:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.591 09:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.591 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.591 "name": "Existed_Raid", 00:17:56.591 "uuid": "93527e97-934e-46da-9282-178253229cd7", 00:17:56.591 "strip_size_kb": 0, 00:17:56.591 "state": "configuring", 00:17:56.591 "raid_level": "raid1", 00:17:56.591 "superblock": true, 00:17:56.591 "num_base_bdevs": 4, 00:17:56.591 "num_base_bdevs_discovered": 2, 00:17:56.591 "num_base_bdevs_operational": 4, 00:17:56.591 "base_bdevs_list": [ 00:17:56.591 { 00:17:56.591 "name": null, 00:17:56.591 
"uuid": "215b1d6f-54d6-44c2-bda3-0a14d2b08e60", 00:17:56.591 "is_configured": false, 00:17:56.591 "data_offset": 0, 00:17:56.591 "data_size": 63488 00:17:56.591 }, 00:17:56.591 { 00:17:56.591 "name": null, 00:17:56.591 "uuid": "3a36753d-4d26-4dc9-97e4-0550d1dbea76", 00:17:56.591 "is_configured": false, 00:17:56.591 "data_offset": 0, 00:17:56.591 "data_size": 63488 00:17:56.591 }, 00:17:56.591 { 00:17:56.591 "name": "BaseBdev3", 00:17:56.591 "uuid": "13b252e7-344e-4ae5-85c8-abd31b1bf1fa", 00:17:56.591 "is_configured": true, 00:17:56.591 "data_offset": 2048, 00:17:56.591 "data_size": 63488 00:17:56.591 }, 00:17:56.591 { 00:17:56.591 "name": "BaseBdev4", 00:17:56.591 "uuid": "97228591-7e73-4274-97df-2fd36fa7331b", 00:17:56.591 "is_configured": true, 00:17:56.591 "data_offset": 2048, 00:17:56.591 "data_size": 63488 00:17:56.591 } 00:17:56.591 ] 00:17:56.591 }' 00:17:56.591 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.591 09:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.157 [2024-10-15 09:17:40.864431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.157 09:17:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.157 "name": "Existed_Raid", 00:17:57.157 "uuid": "93527e97-934e-46da-9282-178253229cd7", 00:17:57.157 "strip_size_kb": 0, 00:17:57.157 "state": "configuring", 00:17:57.157 "raid_level": "raid1", 00:17:57.157 "superblock": true, 00:17:57.157 "num_base_bdevs": 4, 00:17:57.157 "num_base_bdevs_discovered": 3, 00:17:57.157 "num_base_bdevs_operational": 4, 00:17:57.157 "base_bdevs_list": [ 00:17:57.157 { 00:17:57.157 "name": null, 00:17:57.157 "uuid": "215b1d6f-54d6-44c2-bda3-0a14d2b08e60", 00:17:57.157 "is_configured": false, 00:17:57.157 "data_offset": 0, 00:17:57.157 "data_size": 63488 00:17:57.157 }, 00:17:57.157 { 00:17:57.157 "name": "BaseBdev2", 00:17:57.157 "uuid": "3a36753d-4d26-4dc9-97e4-0550d1dbea76", 00:17:57.157 "is_configured": true, 00:17:57.157 "data_offset": 2048, 00:17:57.157 "data_size": 63488 00:17:57.157 }, 00:17:57.157 { 00:17:57.157 "name": "BaseBdev3", 00:17:57.157 "uuid": "13b252e7-344e-4ae5-85c8-abd31b1bf1fa", 00:17:57.157 "is_configured": true, 00:17:57.157 "data_offset": 2048, 00:17:57.157 "data_size": 63488 00:17:57.157 }, 00:17:57.157 { 00:17:57.157 "name": "BaseBdev4", 00:17:57.157 "uuid": "97228591-7e73-4274-97df-2fd36fa7331b", 00:17:57.157 "is_configured": true, 00:17:57.157 "data_offset": 2048, 00:17:57.157 "data_size": 63488 00:17:57.157 } 00:17:57.157 ] 00:17:57.157 }' 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.157 09:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.724 09:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:57.724 09:17:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.724 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.724 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.724 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.724 09:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:57.724 09:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:57.724 09:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.724 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.724 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.724 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.724 09:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 215b1d6f-54d6-44c2-bda3-0a14d2b08e60 00:17:57.724 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.724 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.724 [2024-10-15 09:17:41.530266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:57.724 [2024-10-15 09:17:41.530770] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:57.724 [2024-10-15 09:17:41.530802] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:57.724 NewBaseBdev 00:17:57.724 [2024-10-15 09:17:41.531170] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:57.724 [2024-10-15 09:17:41.531380] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:57.725 [2024-10-15 09:17:41.531397] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:57.725 [2024-10-15 09:17:41.531577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.725 09:17:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.725 [ 00:17:57.725 { 00:17:57.725 "name": "NewBaseBdev", 00:17:57.725 "aliases": [ 00:17:57.725 "215b1d6f-54d6-44c2-bda3-0a14d2b08e60" 00:17:57.725 ], 00:17:57.725 "product_name": "Malloc disk", 00:17:57.725 "block_size": 512, 00:17:57.725 "num_blocks": 65536, 00:17:57.725 "uuid": "215b1d6f-54d6-44c2-bda3-0a14d2b08e60", 00:17:57.725 "assigned_rate_limits": { 00:17:57.725 "rw_ios_per_sec": 0, 00:17:57.725 "rw_mbytes_per_sec": 0, 00:17:57.725 "r_mbytes_per_sec": 0, 00:17:57.725 "w_mbytes_per_sec": 0 00:17:57.725 }, 00:17:57.725 "claimed": true, 00:17:57.725 "claim_type": "exclusive_write", 00:17:57.725 "zoned": false, 00:17:57.725 "supported_io_types": { 00:17:57.725 "read": true, 00:17:57.725 "write": true, 00:17:57.725 "unmap": true, 00:17:57.725 "flush": true, 00:17:57.725 "reset": true, 00:17:57.725 "nvme_admin": false, 00:17:57.725 "nvme_io": false, 00:17:57.725 "nvme_io_md": false, 00:17:57.725 "write_zeroes": true, 00:17:57.725 "zcopy": true, 00:17:57.725 "get_zone_info": false, 00:17:57.725 "zone_management": false, 00:17:57.725 "zone_append": false, 00:17:57.725 "compare": false, 00:17:57.725 "compare_and_write": false, 00:17:57.725 "abort": true, 00:17:57.725 "seek_hole": false, 00:17:57.725 "seek_data": false, 00:17:57.725 "copy": true, 00:17:57.725 "nvme_iov_md": false 00:17:57.725 }, 00:17:57.725 "memory_domains": [ 00:17:57.725 { 00:17:57.725 "dma_device_id": "system", 00:17:57.725 "dma_device_type": 1 00:17:57.725 }, 00:17:57.725 { 00:17:57.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.725 "dma_device_type": 2 00:17:57.725 } 00:17:57.725 ], 00:17:57.725 "driver_specific": {} 00:17:57.725 } 00:17:57.725 ] 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:57.725 09:17:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.725 "name": "Existed_Raid", 00:17:57.725 "uuid": "93527e97-934e-46da-9282-178253229cd7", 00:17:57.725 "strip_size_kb": 0, 00:17:57.725 
"state": "online", 00:17:57.725 "raid_level": "raid1", 00:17:57.725 "superblock": true, 00:17:57.725 "num_base_bdevs": 4, 00:17:57.725 "num_base_bdevs_discovered": 4, 00:17:57.725 "num_base_bdevs_operational": 4, 00:17:57.725 "base_bdevs_list": [ 00:17:57.725 { 00:17:57.725 "name": "NewBaseBdev", 00:17:57.725 "uuid": "215b1d6f-54d6-44c2-bda3-0a14d2b08e60", 00:17:57.725 "is_configured": true, 00:17:57.725 "data_offset": 2048, 00:17:57.725 "data_size": 63488 00:17:57.725 }, 00:17:57.725 { 00:17:57.725 "name": "BaseBdev2", 00:17:57.725 "uuid": "3a36753d-4d26-4dc9-97e4-0550d1dbea76", 00:17:57.725 "is_configured": true, 00:17:57.725 "data_offset": 2048, 00:17:57.725 "data_size": 63488 00:17:57.725 }, 00:17:57.725 { 00:17:57.725 "name": "BaseBdev3", 00:17:57.725 "uuid": "13b252e7-344e-4ae5-85c8-abd31b1bf1fa", 00:17:57.725 "is_configured": true, 00:17:57.725 "data_offset": 2048, 00:17:57.725 "data_size": 63488 00:17:57.725 }, 00:17:57.725 { 00:17:57.725 "name": "BaseBdev4", 00:17:57.725 "uuid": "97228591-7e73-4274-97df-2fd36fa7331b", 00:17:57.725 "is_configured": true, 00:17:57.725 "data_offset": 2048, 00:17:57.725 "data_size": 63488 00:17:57.725 } 00:17:57.725 ] 00:17:57.725 }' 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.725 09:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.294 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:58.294 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:58.294 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:58.294 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:58.294 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:58.294 
09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:58.294 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:58.294 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.294 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.294 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:58.294 [2024-10-15 09:17:42.090943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.294 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.294 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:58.294 "name": "Existed_Raid", 00:17:58.294 "aliases": [ 00:17:58.294 "93527e97-934e-46da-9282-178253229cd7" 00:17:58.294 ], 00:17:58.294 "product_name": "Raid Volume", 00:17:58.294 "block_size": 512, 00:17:58.294 "num_blocks": 63488, 00:17:58.294 "uuid": "93527e97-934e-46da-9282-178253229cd7", 00:17:58.294 "assigned_rate_limits": { 00:17:58.294 "rw_ios_per_sec": 0, 00:17:58.294 "rw_mbytes_per_sec": 0, 00:17:58.294 "r_mbytes_per_sec": 0, 00:17:58.294 "w_mbytes_per_sec": 0 00:17:58.294 }, 00:17:58.294 "claimed": false, 00:17:58.294 "zoned": false, 00:17:58.294 "supported_io_types": { 00:17:58.294 "read": true, 00:17:58.294 "write": true, 00:17:58.294 "unmap": false, 00:17:58.294 "flush": false, 00:17:58.294 "reset": true, 00:17:58.294 "nvme_admin": false, 00:17:58.294 "nvme_io": false, 00:17:58.294 "nvme_io_md": false, 00:17:58.294 "write_zeroes": true, 00:17:58.294 "zcopy": false, 00:17:58.294 "get_zone_info": false, 00:17:58.294 "zone_management": false, 00:17:58.294 "zone_append": false, 00:17:58.294 "compare": false, 00:17:58.294 "compare_and_write": false, 00:17:58.294 
"abort": false, 00:17:58.294 "seek_hole": false, 00:17:58.294 "seek_data": false, 00:17:58.294 "copy": false, 00:17:58.294 "nvme_iov_md": false 00:17:58.294 }, 00:17:58.294 "memory_domains": [ 00:17:58.294 { 00:17:58.294 "dma_device_id": "system", 00:17:58.294 "dma_device_type": 1 00:17:58.294 }, 00:17:58.294 { 00:17:58.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.294 "dma_device_type": 2 00:17:58.294 }, 00:17:58.294 { 00:17:58.294 "dma_device_id": "system", 00:17:58.294 "dma_device_type": 1 00:17:58.294 }, 00:17:58.294 { 00:17:58.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.294 "dma_device_type": 2 00:17:58.294 }, 00:17:58.294 { 00:17:58.294 "dma_device_id": "system", 00:17:58.294 "dma_device_type": 1 00:17:58.294 }, 00:17:58.294 { 00:17:58.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.294 "dma_device_type": 2 00:17:58.294 }, 00:17:58.294 { 00:17:58.294 "dma_device_id": "system", 00:17:58.294 "dma_device_type": 1 00:17:58.294 }, 00:17:58.294 { 00:17:58.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.294 "dma_device_type": 2 00:17:58.294 } 00:17:58.294 ], 00:17:58.294 "driver_specific": { 00:17:58.294 "raid": { 00:17:58.294 "uuid": "93527e97-934e-46da-9282-178253229cd7", 00:17:58.294 "strip_size_kb": 0, 00:17:58.294 "state": "online", 00:17:58.294 "raid_level": "raid1", 00:17:58.294 "superblock": true, 00:17:58.294 "num_base_bdevs": 4, 00:17:58.294 "num_base_bdevs_discovered": 4, 00:17:58.294 "num_base_bdevs_operational": 4, 00:17:58.294 "base_bdevs_list": [ 00:17:58.294 { 00:17:58.294 "name": "NewBaseBdev", 00:17:58.294 "uuid": "215b1d6f-54d6-44c2-bda3-0a14d2b08e60", 00:17:58.294 "is_configured": true, 00:17:58.294 "data_offset": 2048, 00:17:58.294 "data_size": 63488 00:17:58.294 }, 00:17:58.294 { 00:17:58.294 "name": "BaseBdev2", 00:17:58.294 "uuid": "3a36753d-4d26-4dc9-97e4-0550d1dbea76", 00:17:58.294 "is_configured": true, 00:17:58.294 "data_offset": 2048, 00:17:58.294 "data_size": 63488 00:17:58.294 }, 00:17:58.294 { 
00:17:58.294 "name": "BaseBdev3", 00:17:58.294 "uuid": "13b252e7-344e-4ae5-85c8-abd31b1bf1fa", 00:17:58.294 "is_configured": true, 00:17:58.294 "data_offset": 2048, 00:17:58.294 "data_size": 63488 00:17:58.294 }, 00:17:58.294 { 00:17:58.294 "name": "BaseBdev4", 00:17:58.294 "uuid": "97228591-7e73-4274-97df-2fd36fa7331b", 00:17:58.294 "is_configured": true, 00:17:58.294 "data_offset": 2048, 00:17:58.294 "data_size": 63488 00:17:58.294 } 00:17:58.294 ] 00:17:58.294 } 00:17:58.294 } 00:17:58.294 }' 00:17:58.294 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:58.294 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:58.294 BaseBdev2 00:17:58.294 BaseBdev3 00:17:58.294 BaseBdev4' 00:17:58.294 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.619 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.620 [2024-10-15 09:17:42.458596] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:58.620 [2024-10-15 09:17:42.458756] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:58.620 [2024-10-15 09:17:42.458907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.620 [2024-10-15 09:17:42.459328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.620 [2024-10-15 09:17:42.459353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74244 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 74244 ']' 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 74244 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74244 00:17:58.620 killing process with pid 74244 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74244' 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 74244 00:17:58.620 [2024-10-15 09:17:42.495787] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:58.620 09:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 74244 00:17:59.208 [2024-10-15 09:17:42.878513] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:00.145 09:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:00.145 00:18:00.145 real 0m13.273s 00:18:00.145 user 0m21.779s 00:18:00.145 sys 0m1.993s 00:18:00.145 09:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:18:00.145 09:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.145 ************************************ 00:18:00.145 END TEST raid_state_function_test_sb 00:18:00.145 ************************************ 00:18:00.145 09:17:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:18:00.145 09:17:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:00.145 09:17:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:00.145 09:17:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:00.145 ************************************ 00:18:00.145 START TEST raid_superblock_test 00:18:00.145 ************************************ 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:00.145 09:17:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74931 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74931 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74931 ']' 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:00.145 09:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.404 [2024-10-15 09:17:44.163323] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:18:00.404 [2024-10-15 09:17:44.163546] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74931 ] 00:18:00.662 [2024-10-15 09:17:44.345319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.662 [2024-10-15 09:17:44.513298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.920 [2024-10-15 09:17:44.747798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:00.920 [2024-10-15 09:17:44.747863] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.179 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:01.179 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:18:01.179 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:01.179 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:01.179 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:01.179 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:01.179 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:01.179 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:01.179 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:01.179 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:01.179 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:18:01.179 
09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.179 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.437 malloc1 00:18:01.437 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.437 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:01.437 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.437 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.437 [2024-10-15 09:17:45.133484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:01.437 [2024-10-15 09:17:45.133715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.438 [2024-10-15 09:17:45.133806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:01.438 [2024-10-15 09:17:45.133831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.438 [2024-10-15 09:17:45.136810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.438 [2024-10-15 09:17:45.136856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:01.438 pt1 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.438 malloc2 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.438 [2024-10-15 09:17:45.193366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:01.438 [2024-10-15 09:17:45.193447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.438 [2024-10-15 09:17:45.193481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:01.438 [2024-10-15 09:17:45.193497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.438 [2024-10-15 09:17:45.196463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.438 [2024-10-15 09:17:45.196507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:01.438 
pt2 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.438 malloc3 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.438 [2024-10-15 09:17:45.260479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:01.438 [2024-10-15 09:17:45.260550] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.438 [2024-10-15 09:17:45.260585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:01.438 [2024-10-15 09:17:45.260601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.438 [2024-10-15 09:17:45.263495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.438 [2024-10-15 09:17:45.263663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:01.438 pt3 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.438 malloc4 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.438 [2024-10-15 09:17:45.319877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:01.438 [2024-10-15 09:17:45.319949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.438 [2024-10-15 09:17:45.319989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:01.438 [2024-10-15 09:17:45.320007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.438 [2024-10-15 09:17:45.323007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.438 [2024-10-15 09:17:45.323192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:01.438 pt4 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.438 [2024-10-15 09:17:45.332092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:01.438 [2024-10-15 09:17:45.334684] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:01.438 [2024-10-15 09:17:45.334908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:01.438 [2024-10-15 09:17:45.334991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:01.438 [2024-10-15 09:17:45.335282] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:01.438 [2024-10-15 09:17:45.335302] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:01.438 [2024-10-15 09:17:45.335696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:01.438 [2024-10-15 09:17:45.335928] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:01.438 [2024-10-15 09:17:45.335950] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:01.438 [2024-10-15 09:17:45.336222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.438 
09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.438 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.697 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.697 "name": "raid_bdev1", 00:18:01.697 "uuid": "fac07b6b-a2e8-4426-aaaa-e3affa912c6e", 00:18:01.697 "strip_size_kb": 0, 00:18:01.697 "state": "online", 00:18:01.697 "raid_level": "raid1", 00:18:01.697 "superblock": true, 00:18:01.697 "num_base_bdevs": 4, 00:18:01.697 "num_base_bdevs_discovered": 4, 00:18:01.697 "num_base_bdevs_operational": 4, 00:18:01.697 "base_bdevs_list": [ 00:18:01.697 { 00:18:01.697 "name": "pt1", 00:18:01.697 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:01.697 "is_configured": true, 00:18:01.697 "data_offset": 2048, 00:18:01.697 "data_size": 63488 00:18:01.697 }, 00:18:01.697 { 00:18:01.697 "name": "pt2", 00:18:01.697 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.697 "is_configured": true, 00:18:01.697 "data_offset": 2048, 00:18:01.697 "data_size": 63488 00:18:01.697 }, 00:18:01.697 { 00:18:01.697 "name": "pt3", 00:18:01.697 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:01.697 "is_configured": true, 00:18:01.697 "data_offset": 2048, 00:18:01.697 "data_size": 63488 
00:18:01.697 }, 00:18:01.697 { 00:18:01.697 "name": "pt4", 00:18:01.697 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:01.697 "is_configured": true, 00:18:01.697 "data_offset": 2048, 00:18:01.697 "data_size": 63488 00:18:01.697 } 00:18:01.697 ] 00:18:01.697 }' 00:18:01.697 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.697 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.955 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:01.955 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:01.955 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:01.956 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:01.956 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:01.956 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:01.956 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:01.956 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:01.956 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.956 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.956 [2024-10-15 09:17:45.844732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.956 09:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.214 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:02.214 "name": "raid_bdev1", 00:18:02.214 "aliases": [ 00:18:02.214 "fac07b6b-a2e8-4426-aaaa-e3affa912c6e" 00:18:02.214 ], 
00:18:02.214 "product_name": "Raid Volume", 00:18:02.214 "block_size": 512, 00:18:02.214 "num_blocks": 63488, 00:18:02.214 "uuid": "fac07b6b-a2e8-4426-aaaa-e3affa912c6e", 00:18:02.214 "assigned_rate_limits": { 00:18:02.214 "rw_ios_per_sec": 0, 00:18:02.214 "rw_mbytes_per_sec": 0, 00:18:02.214 "r_mbytes_per_sec": 0, 00:18:02.214 "w_mbytes_per_sec": 0 00:18:02.214 }, 00:18:02.214 "claimed": false, 00:18:02.214 "zoned": false, 00:18:02.214 "supported_io_types": { 00:18:02.214 "read": true, 00:18:02.214 "write": true, 00:18:02.214 "unmap": false, 00:18:02.214 "flush": false, 00:18:02.214 "reset": true, 00:18:02.214 "nvme_admin": false, 00:18:02.214 "nvme_io": false, 00:18:02.214 "nvme_io_md": false, 00:18:02.214 "write_zeroes": true, 00:18:02.214 "zcopy": false, 00:18:02.214 "get_zone_info": false, 00:18:02.214 "zone_management": false, 00:18:02.214 "zone_append": false, 00:18:02.214 "compare": false, 00:18:02.214 "compare_and_write": false, 00:18:02.214 "abort": false, 00:18:02.214 "seek_hole": false, 00:18:02.214 "seek_data": false, 00:18:02.214 "copy": false, 00:18:02.214 "nvme_iov_md": false 00:18:02.214 }, 00:18:02.214 "memory_domains": [ 00:18:02.214 { 00:18:02.214 "dma_device_id": "system", 00:18:02.214 "dma_device_type": 1 00:18:02.214 }, 00:18:02.214 { 00:18:02.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.214 "dma_device_type": 2 00:18:02.214 }, 00:18:02.214 { 00:18:02.214 "dma_device_id": "system", 00:18:02.214 "dma_device_type": 1 00:18:02.214 }, 00:18:02.214 { 00:18:02.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.214 "dma_device_type": 2 00:18:02.214 }, 00:18:02.214 { 00:18:02.214 "dma_device_id": "system", 00:18:02.214 "dma_device_type": 1 00:18:02.214 }, 00:18:02.214 { 00:18:02.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.214 "dma_device_type": 2 00:18:02.214 }, 00:18:02.214 { 00:18:02.214 "dma_device_id": "system", 00:18:02.214 "dma_device_type": 1 00:18:02.214 }, 00:18:02.214 { 00:18:02.214 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:02.214 "dma_device_type": 2 00:18:02.214 } 00:18:02.214 ], 00:18:02.214 "driver_specific": { 00:18:02.214 "raid": { 00:18:02.214 "uuid": "fac07b6b-a2e8-4426-aaaa-e3affa912c6e", 00:18:02.214 "strip_size_kb": 0, 00:18:02.214 "state": "online", 00:18:02.214 "raid_level": "raid1", 00:18:02.214 "superblock": true, 00:18:02.214 "num_base_bdevs": 4, 00:18:02.214 "num_base_bdevs_discovered": 4, 00:18:02.214 "num_base_bdevs_operational": 4, 00:18:02.214 "base_bdevs_list": [ 00:18:02.214 { 00:18:02.214 "name": "pt1", 00:18:02.214 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:02.214 "is_configured": true, 00:18:02.214 "data_offset": 2048, 00:18:02.214 "data_size": 63488 00:18:02.214 }, 00:18:02.214 { 00:18:02.214 "name": "pt2", 00:18:02.214 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.214 "is_configured": true, 00:18:02.214 "data_offset": 2048, 00:18:02.214 "data_size": 63488 00:18:02.214 }, 00:18:02.214 { 00:18:02.214 "name": "pt3", 00:18:02.214 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:02.214 "is_configured": true, 00:18:02.214 "data_offset": 2048, 00:18:02.214 "data_size": 63488 00:18:02.214 }, 00:18:02.214 { 00:18:02.214 "name": "pt4", 00:18:02.214 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:02.214 "is_configured": true, 00:18:02.214 "data_offset": 2048, 00:18:02.214 "data_size": 63488 00:18:02.214 } 00:18:02.214 ] 00:18:02.214 } 00:18:02.214 } 00:18:02.214 }' 00:18:02.214 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:02.214 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:02.214 pt2 00:18:02.214 pt3 00:18:02.214 pt4' 00:18:02.214 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.214 09:17:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:02.214 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.214 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.214 09:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:02.214 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.214 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.214 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.214 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.214 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.214 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.214 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:02.214 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.214 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.214 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.214 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.214 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.214 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.214 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.214 09:17:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:02.214 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.214 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.214 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.214 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.472 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.472 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.472 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:02.473 [2024-10-15 09:17:46.220769] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fac07b6b-a2e8-4426-aaaa-e3affa912c6e 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fac07b6b-a2e8-4426-aaaa-e3affa912c6e ']' 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.473 [2024-10-15 09:17:46.272414] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.473 [2024-10-15 09:17:46.272592] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.473 [2024-10-15 09:17:46.272730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.473 [2024-10-15 09:17:46.272848] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.473 [2024-10-15 09:17:46.272873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.473 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.732 [2024-10-15 09:17:46.428482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:02.732 [2024-10-15 09:17:46.431143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:02.732 [2024-10-15 09:17:46.431347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:02.732 [2024-10-15 09:17:46.431418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:02.732 [2024-10-15 09:17:46.431501] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:02.732 [2024-10-15 09:17:46.431581] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:02.732 [2024-10-15 09:17:46.431615] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:02.732 [2024-10-15 09:17:46.431646] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:02.732 [2024-10-15 09:17:46.431668] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.732 [2024-10-15 09:17:46.431685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:18:02.732 request: 00:18:02.732 { 00:18:02.732 "name": "raid_bdev1", 00:18:02.732 "raid_level": "raid1", 00:18:02.732 "base_bdevs": [ 00:18:02.732 "malloc1", 00:18:02.732 "malloc2", 00:18:02.732 "malloc3", 00:18:02.732 "malloc4" 00:18:02.732 ], 00:18:02.732 "superblock": false, 00:18:02.732 "method": "bdev_raid_create", 00:18:02.732 "req_id": 1 00:18:02.732 } 00:18:02.732 Got JSON-RPC error response 00:18:02.732 response: 00:18:02.732 { 00:18:02.732 "code": -17, 00:18:02.732 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:02.732 } 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:02.732 09:17:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.732 [2024-10-15 09:17:46.504577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:02.732 [2024-10-15 09:17:46.504791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.732 [2024-10-15 09:17:46.504956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:02.732 [2024-10-15 09:17:46.505092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.732 [2024-10-15 09:17:46.508299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.732 [2024-10-15 09:17:46.508467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:02.732 [2024-10-15 09:17:46.508599] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:02.732 [2024-10-15 09:17:46.508680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:02.732 pt1 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:02.732 09:17:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.732 "name": "raid_bdev1", 00:18:02.732 "uuid": "fac07b6b-a2e8-4426-aaaa-e3affa912c6e", 00:18:02.732 "strip_size_kb": 0, 00:18:02.732 "state": "configuring", 00:18:02.732 "raid_level": "raid1", 00:18:02.732 "superblock": true, 00:18:02.732 "num_base_bdevs": 4, 00:18:02.732 "num_base_bdevs_discovered": 1, 00:18:02.732 "num_base_bdevs_operational": 4, 00:18:02.732 "base_bdevs_list": [ 00:18:02.732 { 00:18:02.732 "name": "pt1", 00:18:02.732 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:02.732 "is_configured": true, 00:18:02.732 "data_offset": 2048, 00:18:02.732 "data_size": 63488 00:18:02.732 }, 00:18:02.732 { 00:18:02.732 "name": null, 00:18:02.732 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.732 "is_configured": false, 00:18:02.732 "data_offset": 2048, 00:18:02.732 "data_size": 63488 00:18:02.732 }, 00:18:02.732 { 00:18:02.732 "name": null, 00:18:02.732 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:02.732 
"is_configured": false, 00:18:02.732 "data_offset": 2048, 00:18:02.732 "data_size": 63488 00:18:02.732 }, 00:18:02.732 { 00:18:02.732 "name": null, 00:18:02.732 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:02.732 "is_configured": false, 00:18:02.732 "data_offset": 2048, 00:18:02.732 "data_size": 63488 00:18:02.732 } 00:18:02.732 ] 00:18:02.732 }' 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.732 09:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.301 [2024-10-15 09:17:47.032814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:03.301 [2024-10-15 09:17:47.032921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.301 [2024-10-15 09:17:47.032953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:03.301 [2024-10-15 09:17:47.032972] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.301 [2024-10-15 09:17:47.033633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.301 [2024-10-15 09:17:47.033669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:03.301 [2024-10-15 09:17:47.033779] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:03.301 [2024-10-15 09:17:47.033825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:18:03.301 pt2 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.301 [2024-10-15 09:17:47.040799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.301 09:17:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.301 "name": "raid_bdev1", 00:18:03.301 "uuid": "fac07b6b-a2e8-4426-aaaa-e3affa912c6e", 00:18:03.301 "strip_size_kb": 0, 00:18:03.301 "state": "configuring", 00:18:03.301 "raid_level": "raid1", 00:18:03.301 "superblock": true, 00:18:03.301 "num_base_bdevs": 4, 00:18:03.301 "num_base_bdevs_discovered": 1, 00:18:03.301 "num_base_bdevs_operational": 4, 00:18:03.301 "base_bdevs_list": [ 00:18:03.301 { 00:18:03.301 "name": "pt1", 00:18:03.301 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:03.301 "is_configured": true, 00:18:03.301 "data_offset": 2048, 00:18:03.301 "data_size": 63488 00:18:03.301 }, 00:18:03.301 { 00:18:03.301 "name": null, 00:18:03.301 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.301 "is_configured": false, 00:18:03.301 "data_offset": 0, 00:18:03.301 "data_size": 63488 00:18:03.301 }, 00:18:03.301 { 00:18:03.301 "name": null, 00:18:03.301 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:03.301 "is_configured": false, 00:18:03.301 "data_offset": 2048, 00:18:03.301 "data_size": 63488 00:18:03.301 }, 00:18:03.301 { 00:18:03.301 "name": null, 00:18:03.301 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:03.301 "is_configured": false, 00:18:03.301 "data_offset": 2048, 00:18:03.301 "data_size": 63488 00:18:03.301 } 00:18:03.301 ] 00:18:03.301 }' 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.301 09:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.868 [2024-10-15 09:17:47.576985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:03.868 [2024-10-15 09:17:47.577071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.868 [2024-10-15 09:17:47.577113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:03.868 [2024-10-15 09:17:47.577151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.868 [2024-10-15 09:17:47.577769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.868 [2024-10-15 09:17:47.577800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:03.868 [2024-10-15 09:17:47.577918] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:03.868 [2024-10-15 09:17:47.577953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:03.868 pt2 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:03.868 09:17:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.868 [2024-10-15 09:17:47.584931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:03.868 [2024-10-15 09:17:47.584997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.868 [2024-10-15 09:17:47.585028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:03.868 [2024-10-15 09:17:47.585042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.868 [2024-10-15 09:17:47.585515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.868 [2024-10-15 09:17:47.585546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:03.868 [2024-10-15 09:17:47.585626] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:03.868 [2024-10-15 09:17:47.585653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:03.868 pt3 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.868 [2024-10-15 09:17:47.592896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:03.868 [2024-10-15 
09:17:47.593087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.868 [2024-10-15 09:17:47.593142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:03.868 [2024-10-15 09:17:47.593161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.868 [2024-10-15 09:17:47.593621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.868 [2024-10-15 09:17:47.593652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:03.868 [2024-10-15 09:17:47.593731] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:03.868 [2024-10-15 09:17:47.593758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:03.868 [2024-10-15 09:17:47.593943] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:03.868 [2024-10-15 09:17:47.593966] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:03.868 [2024-10-15 09:17:47.594328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:03.868 [2024-10-15 09:17:47.594532] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:03.868 [2024-10-15 09:17:47.594554] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:03.868 [2024-10-15 09:17:47.594714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.868 pt4 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.868 "name": "raid_bdev1", 00:18:03.868 "uuid": "fac07b6b-a2e8-4426-aaaa-e3affa912c6e", 00:18:03.868 "strip_size_kb": 0, 00:18:03.868 "state": "online", 00:18:03.868 "raid_level": "raid1", 00:18:03.868 "superblock": true, 00:18:03.868 "num_base_bdevs": 4, 00:18:03.868 
"num_base_bdevs_discovered": 4, 00:18:03.868 "num_base_bdevs_operational": 4, 00:18:03.868 "base_bdevs_list": [ 00:18:03.868 { 00:18:03.868 "name": "pt1", 00:18:03.868 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:03.868 "is_configured": true, 00:18:03.868 "data_offset": 2048, 00:18:03.868 "data_size": 63488 00:18:03.868 }, 00:18:03.868 { 00:18:03.868 "name": "pt2", 00:18:03.868 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.868 "is_configured": true, 00:18:03.868 "data_offset": 2048, 00:18:03.868 "data_size": 63488 00:18:03.868 }, 00:18:03.868 { 00:18:03.868 "name": "pt3", 00:18:03.868 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:03.868 "is_configured": true, 00:18:03.868 "data_offset": 2048, 00:18:03.868 "data_size": 63488 00:18:03.868 }, 00:18:03.868 { 00:18:03.868 "name": "pt4", 00:18:03.868 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:03.868 "is_configured": true, 00:18:03.868 "data_offset": 2048, 00:18:03.868 "data_size": 63488 00:18:03.868 } 00:18:03.868 ] 00:18:03.868 }' 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.868 09:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.436 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:04.436 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:04.436 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:04.436 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:04.436 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:04.436 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:04.436 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:18:04.436 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:04.436 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.436 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.436 [2024-10-15 09:17:48.117553] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.436 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.436 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:04.436 "name": "raid_bdev1", 00:18:04.436 "aliases": [ 00:18:04.436 "fac07b6b-a2e8-4426-aaaa-e3affa912c6e" 00:18:04.436 ], 00:18:04.436 "product_name": "Raid Volume", 00:18:04.436 "block_size": 512, 00:18:04.436 "num_blocks": 63488, 00:18:04.436 "uuid": "fac07b6b-a2e8-4426-aaaa-e3affa912c6e", 00:18:04.436 "assigned_rate_limits": { 00:18:04.436 "rw_ios_per_sec": 0, 00:18:04.437 "rw_mbytes_per_sec": 0, 00:18:04.437 "r_mbytes_per_sec": 0, 00:18:04.437 "w_mbytes_per_sec": 0 00:18:04.437 }, 00:18:04.437 "claimed": false, 00:18:04.437 "zoned": false, 00:18:04.437 "supported_io_types": { 00:18:04.437 "read": true, 00:18:04.437 "write": true, 00:18:04.437 "unmap": false, 00:18:04.437 "flush": false, 00:18:04.437 "reset": true, 00:18:04.437 "nvme_admin": false, 00:18:04.437 "nvme_io": false, 00:18:04.437 "nvme_io_md": false, 00:18:04.437 "write_zeroes": true, 00:18:04.437 "zcopy": false, 00:18:04.437 "get_zone_info": false, 00:18:04.437 "zone_management": false, 00:18:04.437 "zone_append": false, 00:18:04.437 "compare": false, 00:18:04.437 "compare_and_write": false, 00:18:04.437 "abort": false, 00:18:04.437 "seek_hole": false, 00:18:04.437 "seek_data": false, 00:18:04.437 "copy": false, 00:18:04.437 "nvme_iov_md": false 00:18:04.437 }, 00:18:04.437 "memory_domains": [ 00:18:04.437 { 00:18:04.437 "dma_device_id": "system", 00:18:04.437 
"dma_device_type": 1 00:18:04.437 }, 00:18:04.437 { 00:18:04.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.437 "dma_device_type": 2 00:18:04.437 }, 00:18:04.437 { 00:18:04.437 "dma_device_id": "system", 00:18:04.437 "dma_device_type": 1 00:18:04.437 }, 00:18:04.437 { 00:18:04.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.437 "dma_device_type": 2 00:18:04.437 }, 00:18:04.437 { 00:18:04.437 "dma_device_id": "system", 00:18:04.437 "dma_device_type": 1 00:18:04.437 }, 00:18:04.437 { 00:18:04.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.437 "dma_device_type": 2 00:18:04.437 }, 00:18:04.437 { 00:18:04.437 "dma_device_id": "system", 00:18:04.437 "dma_device_type": 1 00:18:04.437 }, 00:18:04.437 { 00:18:04.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.437 "dma_device_type": 2 00:18:04.437 } 00:18:04.437 ], 00:18:04.437 "driver_specific": { 00:18:04.437 "raid": { 00:18:04.437 "uuid": "fac07b6b-a2e8-4426-aaaa-e3affa912c6e", 00:18:04.437 "strip_size_kb": 0, 00:18:04.437 "state": "online", 00:18:04.437 "raid_level": "raid1", 00:18:04.437 "superblock": true, 00:18:04.437 "num_base_bdevs": 4, 00:18:04.437 "num_base_bdevs_discovered": 4, 00:18:04.437 "num_base_bdevs_operational": 4, 00:18:04.437 "base_bdevs_list": [ 00:18:04.437 { 00:18:04.437 "name": "pt1", 00:18:04.437 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:04.437 "is_configured": true, 00:18:04.437 "data_offset": 2048, 00:18:04.437 "data_size": 63488 00:18:04.437 }, 00:18:04.437 { 00:18:04.437 "name": "pt2", 00:18:04.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.437 "is_configured": true, 00:18:04.437 "data_offset": 2048, 00:18:04.437 "data_size": 63488 00:18:04.437 }, 00:18:04.437 { 00:18:04.437 "name": "pt3", 00:18:04.437 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:04.437 "is_configured": true, 00:18:04.437 "data_offset": 2048, 00:18:04.437 "data_size": 63488 00:18:04.437 }, 00:18:04.437 { 00:18:04.437 "name": "pt4", 00:18:04.437 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:18:04.437 "is_configured": true, 00:18:04.437 "data_offset": 2048, 00:18:04.437 "data_size": 63488 00:18:04.437 } 00:18:04.437 ] 00:18:04.437 } 00:18:04.437 } 00:18:04.437 }' 00:18:04.437 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:04.437 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:04.437 pt2 00:18:04.437 pt3 00:18:04.437 pt4' 00:18:04.437 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:04.437 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:04.437 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:04.437 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:04.437 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.437 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.437 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:04.437 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.437 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:04.437 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:04.437 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:04.437 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:04.437 09:17:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.437 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.437 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:04.437 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:04.705 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:04.705 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:04.705 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.706 [2024-10-15 09:17:48.521596] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fac07b6b-a2e8-4426-aaaa-e3affa912c6e '!=' fac07b6b-a2e8-4426-aaaa-e3affa912c6e ']' 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.706 [2024-10-15 09:17:48.573286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:04.706 09:17:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.706 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.013 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.013 "name": "raid_bdev1", 00:18:05.013 "uuid": "fac07b6b-a2e8-4426-aaaa-e3affa912c6e", 00:18:05.013 "strip_size_kb": 0, 00:18:05.013 "state": "online", 
00:18:05.013 "raid_level": "raid1", 00:18:05.013 "superblock": true, 00:18:05.013 "num_base_bdevs": 4, 00:18:05.013 "num_base_bdevs_discovered": 3, 00:18:05.013 "num_base_bdevs_operational": 3, 00:18:05.013 "base_bdevs_list": [ 00:18:05.013 { 00:18:05.013 "name": null, 00:18:05.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.013 "is_configured": false, 00:18:05.013 "data_offset": 0, 00:18:05.013 "data_size": 63488 00:18:05.013 }, 00:18:05.013 { 00:18:05.013 "name": "pt2", 00:18:05.013 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.013 "is_configured": true, 00:18:05.013 "data_offset": 2048, 00:18:05.013 "data_size": 63488 00:18:05.013 }, 00:18:05.013 { 00:18:05.013 "name": "pt3", 00:18:05.013 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:05.013 "is_configured": true, 00:18:05.014 "data_offset": 2048, 00:18:05.014 "data_size": 63488 00:18:05.014 }, 00:18:05.014 { 00:18:05.014 "name": "pt4", 00:18:05.014 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:05.014 "is_configured": true, 00:18:05.014 "data_offset": 2048, 00:18:05.014 "data_size": 63488 00:18:05.014 } 00:18:05.014 ] 00:18:05.014 }' 00:18:05.014 09:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.014 09:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.272 [2024-10-15 09:17:49.105378] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:05.272 [2024-10-15 09:17:49.105422] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:05.272 [2024-10-15 09:17:49.105560] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:18:05.272 [2024-10-15 09:17:49.105669] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:05.272 [2024-10-15 09:17:49.105686] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:05.272 
09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.272 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.273 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.273 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:05.273 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:05.273 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:05.273 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:05.273 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:05.273 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.273 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.532 [2024-10-15 09:17:49.201353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:05.532 [2024-10-15 09:17:49.201436] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.532 [2024-10-15 09:17:49.201472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:05.532 [2024-10-15 09:17:49.201488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.532 [2024-10-15 09:17:49.204784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.532 [2024-10-15 09:17:49.204828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:05.532 [2024-10-15 09:17:49.204946] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:05.532 [2024-10-15 09:17:49.205009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:05.532 pt2 00:18:05.532 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.532 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:05.532 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.532 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:05.532 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.532 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.532 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:05.532 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.532 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.532 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.532 09:17:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.532 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.532 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.532 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.532 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.532 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.532 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.532 "name": "raid_bdev1", 00:18:05.532 "uuid": "fac07b6b-a2e8-4426-aaaa-e3affa912c6e", 00:18:05.532 "strip_size_kb": 0, 00:18:05.532 "state": "configuring", 00:18:05.532 "raid_level": "raid1", 00:18:05.532 "superblock": true, 00:18:05.532 "num_base_bdevs": 4, 00:18:05.532 "num_base_bdevs_discovered": 1, 00:18:05.532 "num_base_bdevs_operational": 3, 00:18:05.532 "base_bdevs_list": [ 00:18:05.532 { 00:18:05.532 "name": null, 00:18:05.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.532 "is_configured": false, 00:18:05.532 "data_offset": 2048, 00:18:05.532 "data_size": 63488 00:18:05.532 }, 00:18:05.532 { 00:18:05.532 "name": "pt2", 00:18:05.532 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.532 "is_configured": true, 00:18:05.532 "data_offset": 2048, 00:18:05.532 "data_size": 63488 00:18:05.532 }, 00:18:05.532 { 00:18:05.532 "name": null, 00:18:05.532 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:05.532 "is_configured": false, 00:18:05.532 "data_offset": 2048, 00:18:05.532 "data_size": 63488 00:18:05.532 }, 00:18:05.532 { 00:18:05.532 "name": null, 00:18:05.532 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:05.532 "is_configured": false, 00:18:05.532 "data_offset": 2048, 00:18:05.532 "data_size": 63488 00:18:05.532 } 00:18:05.532 ] 00:18:05.532 }' 
00:18:05.532 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.532 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.792 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:05.792 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:05.792 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:05.792 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.792 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.792 [2024-10-15 09:17:49.717554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:05.792 [2024-10-15 09:17:49.717651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.792 [2024-10-15 09:17:49.717693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:05.792 [2024-10-15 09:17:49.717709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.792 [2024-10-15 09:17:49.718393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.792 [2024-10-15 09:17:49.718424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:05.792 [2024-10-15 09:17:49.718545] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:05.792 [2024-10-15 09:17:49.718587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:06.051 pt3 00:18:06.051 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.051 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:18:06.051 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.051 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:06.051 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.051 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.051 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:06.051 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.051 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.051 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.051 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.051 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.051 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.051 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.051 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.051 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.051 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.051 "name": "raid_bdev1", 00:18:06.051 "uuid": "fac07b6b-a2e8-4426-aaaa-e3affa912c6e", 00:18:06.051 "strip_size_kb": 0, 00:18:06.051 "state": "configuring", 00:18:06.051 "raid_level": "raid1", 00:18:06.051 "superblock": true, 00:18:06.051 "num_base_bdevs": 4, 00:18:06.051 "num_base_bdevs_discovered": 2, 00:18:06.051 "num_base_bdevs_operational": 3, 00:18:06.051 
"base_bdevs_list": [ 00:18:06.051 { 00:18:06.051 "name": null, 00:18:06.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.051 "is_configured": false, 00:18:06.051 "data_offset": 2048, 00:18:06.051 "data_size": 63488 00:18:06.051 }, 00:18:06.051 { 00:18:06.051 "name": "pt2", 00:18:06.051 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.051 "is_configured": true, 00:18:06.051 "data_offset": 2048, 00:18:06.051 "data_size": 63488 00:18:06.051 }, 00:18:06.051 { 00:18:06.051 "name": "pt3", 00:18:06.051 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:06.051 "is_configured": true, 00:18:06.051 "data_offset": 2048, 00:18:06.051 "data_size": 63488 00:18:06.051 }, 00:18:06.051 { 00:18:06.051 "name": null, 00:18:06.051 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:06.051 "is_configured": false, 00:18:06.051 "data_offset": 2048, 00:18:06.051 "data_size": 63488 00:18:06.051 } 00:18:06.051 ] 00:18:06.051 }' 00:18:06.051 09:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.051 09:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.310 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:06.310 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:06.310 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:18:06.310 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:06.310 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.310 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.310 [2024-10-15 09:17:50.229717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:06.310 [2024-10-15 09:17:50.229832] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.310 [2024-10-15 09:17:50.229874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:06.310 [2024-10-15 09:17:50.229889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.310 [2024-10-15 09:17:50.230574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.310 [2024-10-15 09:17:50.230743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:06.310 [2024-10-15 09:17:50.230883] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:06.310 [2024-10-15 09:17:50.230929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:06.310 [2024-10-15 09:17:50.231136] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:06.311 [2024-10-15 09:17:50.231154] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:06.311 [2024-10-15 09:17:50.231513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:06.311 [2024-10-15 09:17:50.231702] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:06.311 [2024-10-15 09:17:50.231723] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:06.311 [2024-10-15 09:17:50.231908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.311 pt4 00:18:06.311 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.311 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:06.311 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.311 09:17:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.311 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.311 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.311 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:06.311 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.311 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.311 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.311 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.569 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.569 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.569 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.569 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.569 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.569 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.569 "name": "raid_bdev1", 00:18:06.569 "uuid": "fac07b6b-a2e8-4426-aaaa-e3affa912c6e", 00:18:06.569 "strip_size_kb": 0, 00:18:06.569 "state": "online", 00:18:06.569 "raid_level": "raid1", 00:18:06.569 "superblock": true, 00:18:06.569 "num_base_bdevs": 4, 00:18:06.569 "num_base_bdevs_discovered": 3, 00:18:06.569 "num_base_bdevs_operational": 3, 00:18:06.569 "base_bdevs_list": [ 00:18:06.569 { 00:18:06.569 "name": null, 00:18:06.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.569 "is_configured": false, 00:18:06.569 
"data_offset": 2048, 00:18:06.569 "data_size": 63488 00:18:06.569 }, 00:18:06.569 { 00:18:06.569 "name": "pt2", 00:18:06.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.569 "is_configured": true, 00:18:06.569 "data_offset": 2048, 00:18:06.569 "data_size": 63488 00:18:06.569 }, 00:18:06.569 { 00:18:06.569 "name": "pt3", 00:18:06.569 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:06.569 "is_configured": true, 00:18:06.569 "data_offset": 2048, 00:18:06.569 "data_size": 63488 00:18:06.569 }, 00:18:06.569 { 00:18:06.569 "name": "pt4", 00:18:06.569 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:06.569 "is_configured": true, 00:18:06.569 "data_offset": 2048, 00:18:06.569 "data_size": 63488 00:18:06.569 } 00:18:06.569 ] 00:18:06.569 }' 00:18:06.569 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.569 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.828 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:06.828 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.828 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.828 [2024-10-15 09:17:50.745826] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:06.828 [2024-10-15 09:17:50.745994] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:06.828 [2024-10-15 09:17:50.746260] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:06.828 [2024-10-15 09:17:50.746488] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:06.828 [2024-10-15 09:17:50.746650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:06.828 09:17:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.828 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:06.828 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.828 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.828 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.087 [2024-10-15 09:17:50.809830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:07.087 [2024-10-15 09:17:50.809914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:18:07.087 [2024-10-15 09:17:50.809944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:07.087 [2024-10-15 09:17:50.809963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.087 [2024-10-15 09:17:50.813366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.087 [2024-10-15 09:17:50.813417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:07.087 [2024-10-15 09:17:50.813534] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:07.087 [2024-10-15 09:17:50.813602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:07.087 [2024-10-15 09:17:50.813813] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:07.087 [2024-10-15 09:17:50.813835] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.087 [2024-10-15 09:17:50.813859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:07.087 [2024-10-15 09:17:50.813950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:07.087 [2024-10-15 09:17:50.814180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:07.087 pt1 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.087 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.088 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.088 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.088 "name": "raid_bdev1", 00:18:07.088 "uuid": "fac07b6b-a2e8-4426-aaaa-e3affa912c6e", 00:18:07.088 "strip_size_kb": 0, 00:18:07.088 "state": "configuring", 00:18:07.088 "raid_level": "raid1", 00:18:07.088 "superblock": true, 00:18:07.088 "num_base_bdevs": 4, 00:18:07.088 "num_base_bdevs_discovered": 2, 00:18:07.088 "num_base_bdevs_operational": 3, 00:18:07.088 "base_bdevs_list": [ 00:18:07.088 { 00:18:07.088 "name": null, 00:18:07.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.088 "is_configured": false, 00:18:07.088 "data_offset": 2048, 00:18:07.088 
"data_size": 63488 00:18:07.088 }, 00:18:07.088 { 00:18:07.088 "name": "pt2", 00:18:07.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:07.088 "is_configured": true, 00:18:07.088 "data_offset": 2048, 00:18:07.088 "data_size": 63488 00:18:07.088 }, 00:18:07.088 { 00:18:07.088 "name": "pt3", 00:18:07.088 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:07.088 "is_configured": true, 00:18:07.088 "data_offset": 2048, 00:18:07.088 "data_size": 63488 00:18:07.088 }, 00:18:07.088 { 00:18:07.088 "name": null, 00:18:07.088 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:07.088 "is_configured": false, 00:18:07.088 "data_offset": 2048, 00:18:07.088 "data_size": 63488 00:18:07.088 } 00:18:07.088 ] 00:18:07.088 }' 00:18:07.088 09:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.088 09:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.705 [2024-10-15 
09:17:51.370157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:07.705 [2024-10-15 09:17:51.370244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.705 [2024-10-15 09:17:51.370282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:07.705 [2024-10-15 09:17:51.370298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.705 [2024-10-15 09:17:51.370908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.705 [2024-10-15 09:17:51.370933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:07.705 [2024-10-15 09:17:51.371045] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:07.705 [2024-10-15 09:17:51.371078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:07.705 [2024-10-15 09:17:51.371460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:07.705 [2024-10-15 09:17:51.371599] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:07.705 [2024-10-15 09:17:51.371979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:07.705 [2024-10-15 09:17:51.372297] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:07.705 [2024-10-15 09:17:51.372438] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:07.705 [2024-10-15 09:17:51.372772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.705 pt4 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:07.705 09:17:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.705 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.705 "name": "raid_bdev1", 00:18:07.705 "uuid": "fac07b6b-a2e8-4426-aaaa-e3affa912c6e", 00:18:07.705 "strip_size_kb": 0, 00:18:07.705 "state": "online", 00:18:07.705 "raid_level": "raid1", 00:18:07.705 "superblock": true, 00:18:07.705 "num_base_bdevs": 4, 00:18:07.705 "num_base_bdevs_discovered": 3, 00:18:07.705 "num_base_bdevs_operational": 3, 00:18:07.706 "base_bdevs_list": [ 00:18:07.706 { 
00:18:07.706 "name": null, 00:18:07.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.706 "is_configured": false, 00:18:07.706 "data_offset": 2048, 00:18:07.706 "data_size": 63488 00:18:07.706 }, 00:18:07.706 { 00:18:07.706 "name": "pt2", 00:18:07.706 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:07.706 "is_configured": true, 00:18:07.706 "data_offset": 2048, 00:18:07.706 "data_size": 63488 00:18:07.706 }, 00:18:07.706 { 00:18:07.706 "name": "pt3", 00:18:07.706 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:07.706 "is_configured": true, 00:18:07.706 "data_offset": 2048, 00:18:07.706 "data_size": 63488 00:18:07.706 }, 00:18:07.706 { 00:18:07.706 "name": "pt4", 00:18:07.706 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:07.706 "is_configured": true, 00:18:07.706 "data_offset": 2048, 00:18:07.706 "data_size": 63488 00:18:07.706 } 00:18:07.706 ] 00:18:07.706 }' 00:18:07.706 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.706 09:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.273 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:08.273 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:08.273 09:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.273 09:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.273 09:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.273 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:08.273 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:08.273 09:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.273 
09:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.273 09:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:08.273 [2024-10-15 09:17:51.962667] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:08.273 09:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.273 09:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' fac07b6b-a2e8-4426-aaaa-e3affa912c6e '!=' fac07b6b-a2e8-4426-aaaa-e3affa912c6e ']' 00:18:08.273 09:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74931 00:18:08.273 09:17:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74931 ']' 00:18:08.273 09:17:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74931 00:18:08.273 09:17:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:18:08.273 09:17:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:08.273 09:17:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74931 00:18:08.273 killing process with pid 74931 00:18:08.273 09:17:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:08.273 09:17:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:08.273 09:17:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74931' 00:18:08.273 09:17:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74931 00:18:08.273 09:17:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74931 00:18:08.273 [2024-10-15 09:17:52.042384] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:08.273 [2024-10-15 09:17:52.042551] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.273 [2024-10-15 09:17:52.042674] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:08.273 [2024-10-15 09:17:52.042745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:08.532 [2024-10-15 09:17:52.437722] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:09.909 09:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:09.909 00:18:09.909 real 0m9.525s 00:18:09.909 user 0m15.456s 00:18:09.909 sys 0m1.466s 00:18:09.909 09:17:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:09.909 09:17:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.909 ************************************ 00:18:09.909 END TEST raid_superblock_test 00:18:09.909 ************************************ 00:18:09.909 09:17:53 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:18:09.909 09:17:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:09.909 09:17:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:09.909 09:17:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:09.909 ************************************ 00:18:09.909 START TEST raid_read_error_test 00:18:09.909 ************************************ 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:18:09.909 09:17:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:09.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xdKhUMsC9X 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75429 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75429 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 75429 ']' 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:09.909 09:17:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.909 [2024-10-15 09:17:53.754943] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:18:09.909 [2024-10-15 09:17:53.755163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75429 ] 00:18:10.168 [2024-10-15 09:17:53.936758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.168 [2024-10-15 09:17:54.088677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.426 [2024-10-15 09:17:54.310154] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.426 [2024-10-15 09:17:54.310216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.994 BaseBdev1_malloc 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.994 true 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.994 [2024-10-15 09:17:54.830842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:10.994 [2024-10-15 09:17:54.830912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.994 [2024-10-15 09:17:54.830942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:10.994 [2024-10-15 09:17:54.830962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.994 [2024-10-15 09:17:54.833953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.994 [2024-10-15 09:17:54.833998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:10.994 BaseBdev1 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.994 BaseBdev2_malloc 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.994 true 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.994 [2024-10-15 09:17:54.893056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:10.994 [2024-10-15 09:17:54.893148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.994 [2024-10-15 09:17:54.893175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:10.994 [2024-10-15 09:17:54.893192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.994 [2024-10-15 09:17:54.896381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.994 [2024-10-15 09:17:54.896438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:10.994 BaseBdev2 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.994 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.253 BaseBdev3_malloc 00:18:11.253 09:17:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.253 09:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:18:11.253 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.253 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.253 true 00:18:11.253 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.253 09:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:11.253 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.253 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.253 [2024-10-15 09:17:54.972666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:11.253 [2024-10-15 09:17:54.972749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.253 [2024-10-15 09:17:54.972777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:11.253 [2024-10-15 09:17:54.972796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.253 [2024-10-15 09:17:54.975849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.253 [2024-10-15 09:17:54.975894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:11.253 BaseBdev3 00:18:11.253 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.253 09:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:11.253 09:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:18:11.253 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.253 09:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.253 BaseBdev4_malloc 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.253 true 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.253 [2024-10-15 09:17:55.036914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:11.253 [2024-10-15 09:17:55.036981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.253 [2024-10-15 09:17:55.037010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:11.253 [2024-10-15 09:17:55.037031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.253 [2024-10-15 09:17:55.039964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.253 [2024-10-15 09:17:55.040014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:11.253 BaseBdev4 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.253 [2024-10-15 09:17:55.045004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:11.253 [2024-10-15 09:17:55.047675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:11.253 [2024-10-15 09:17:55.047825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:11.253 [2024-10-15 09:17:55.047923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:11.253 [2024-10-15 09:17:55.048250] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:18:11.253 [2024-10-15 09:17:55.048285] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:11.253 [2024-10-15 09:17:55.048597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:11.253 [2024-10-15 09:17:55.048839] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:18:11.253 [2024-10-15 09:17:55.048865] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:18:11.253 [2024-10-15 09:17:55.049117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:11.253 09:17:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.253 "name": "raid_bdev1", 00:18:11.253 "uuid": "a6a21815-71e3-4d04-9f75-b0de50373883", 00:18:11.253 "strip_size_kb": 0, 00:18:11.253 "state": "online", 00:18:11.253 "raid_level": "raid1", 00:18:11.253 "superblock": true, 00:18:11.253 "num_base_bdevs": 4, 00:18:11.253 "num_base_bdevs_discovered": 4, 00:18:11.253 "num_base_bdevs_operational": 4, 00:18:11.253 "base_bdevs_list": [ 00:18:11.253 { 
00:18:11.253 "name": "BaseBdev1", 00:18:11.253 "uuid": "821862d3-40ab-5916-b464-6922d74dbc50", 00:18:11.253 "is_configured": true, 00:18:11.253 "data_offset": 2048, 00:18:11.253 "data_size": 63488 00:18:11.253 }, 00:18:11.253 { 00:18:11.253 "name": "BaseBdev2", 00:18:11.253 "uuid": "61c49941-5da2-5a93-b744-718808dc1945", 00:18:11.253 "is_configured": true, 00:18:11.253 "data_offset": 2048, 00:18:11.253 "data_size": 63488 00:18:11.253 }, 00:18:11.253 { 00:18:11.253 "name": "BaseBdev3", 00:18:11.253 "uuid": "b20078be-6e5d-57df-98af-ae70999a56ad", 00:18:11.253 "is_configured": true, 00:18:11.253 "data_offset": 2048, 00:18:11.253 "data_size": 63488 00:18:11.253 }, 00:18:11.253 { 00:18:11.253 "name": "BaseBdev4", 00:18:11.253 "uuid": "8090dd06-bfa5-5ef6-8a42-993e1511a0d6", 00:18:11.253 "is_configured": true, 00:18:11.253 "data_offset": 2048, 00:18:11.253 "data_size": 63488 00:18:11.253 } 00:18:11.253 ] 00:18:11.253 }' 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.253 09:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.820 09:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:11.820 09:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:11.820 [2024-10-15 09:17:55.682930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:12.754 09:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:12.754 09:17:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.754 09:17:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.754 09:17:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.754 09:17:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:12.754 09:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:12.754 09:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:18:12.754 09:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:18:12.755 09:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:12.755 09:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.755 09:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.755 09:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.755 09:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.755 09:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:12.755 09:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.755 09:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.755 09:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.755 09:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.755 09:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.755 09:17:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.755 09:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.755 09:17:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.755 09:17:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.755 09:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.755 "name": "raid_bdev1", 00:18:12.755 "uuid": "a6a21815-71e3-4d04-9f75-b0de50373883", 00:18:12.755 "strip_size_kb": 0, 00:18:12.755 "state": "online", 00:18:12.755 "raid_level": "raid1", 00:18:12.755 "superblock": true, 00:18:12.755 "num_base_bdevs": 4, 00:18:12.755 "num_base_bdevs_discovered": 4, 00:18:12.755 "num_base_bdevs_operational": 4, 00:18:12.755 "base_bdevs_list": [ 00:18:12.755 { 00:18:12.755 "name": "BaseBdev1", 00:18:12.755 "uuid": "821862d3-40ab-5916-b464-6922d74dbc50", 00:18:12.755 "is_configured": true, 00:18:12.755 "data_offset": 2048, 00:18:12.755 "data_size": 63488 00:18:12.755 }, 00:18:12.755 { 00:18:12.755 "name": "BaseBdev2", 00:18:12.755 "uuid": "61c49941-5da2-5a93-b744-718808dc1945", 00:18:12.755 "is_configured": true, 00:18:12.755 "data_offset": 2048, 00:18:12.755 "data_size": 63488 00:18:12.755 }, 00:18:12.755 { 00:18:12.755 "name": "BaseBdev3", 00:18:12.755 "uuid": "b20078be-6e5d-57df-98af-ae70999a56ad", 00:18:12.755 "is_configured": true, 00:18:12.755 "data_offset": 2048, 00:18:12.755 "data_size": 63488 00:18:12.755 }, 00:18:12.755 { 00:18:12.755 "name": "BaseBdev4", 00:18:12.755 "uuid": "8090dd06-bfa5-5ef6-8a42-993e1511a0d6", 00:18:12.755 "is_configured": true, 00:18:12.755 "data_offset": 2048, 00:18:12.755 "data_size": 63488 00:18:12.755 } 00:18:12.755 ] 00:18:12.755 }' 00:18:12.755 09:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.755 09:17:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.321 09:17:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:13.321 09:17:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.321 09:17:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:13.321 [2024-10-15 09:17:57.115672] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.321 [2024-10-15 09:17:57.115730] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:13.321 [2024-10-15 09:17:57.119235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:13.321 [2024-10-15 09:17:57.119319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.321 [2024-10-15 09:17:57.119502] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:13.321 [2024-10-15 09:17:57.119523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:18:13.321 { 00:18:13.321 "results": [ 00:18:13.321 { 00:18:13.321 "job": "raid_bdev1", 00:18:13.321 "core_mask": "0x1", 00:18:13.321 "workload": "randrw", 00:18:13.321 "percentage": 50, 00:18:13.321 "status": "finished", 00:18:13.321 "queue_depth": 1, 00:18:13.321 "io_size": 131072, 00:18:13.321 "runtime": 1.429773, 00:18:13.321 "iops": 6536.002568239854, 00:18:13.321 "mibps": 817.0003210299817, 00:18:13.321 "io_failed": 0, 00:18:13.321 "io_timeout": 0, 00:18:13.321 "avg_latency_us": 148.8391770027725, 00:18:13.321 "min_latency_us": 40.72727272727273, 00:18:13.321 "max_latency_us": 2085.2363636363634 00:18:13.321 } 00:18:13.321 ], 00:18:13.321 "core_count": 1 00:18:13.321 } 00:18:13.321 09:17:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.321 09:17:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75429 00:18:13.321 09:17:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 75429 ']' 00:18:13.321 09:17:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 75429 00:18:13.321 09:17:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:18:13.321 09:17:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:13.321 09:17:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75429 00:18:13.321 killing process with pid 75429 00:18:13.321 09:17:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:13.321 09:17:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:13.321 09:17:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75429' 00:18:13.321 09:17:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 75429 00:18:13.321 [2024-10-15 09:17:57.155779] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:13.321 09:17:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 75429 00:18:13.580 [2024-10-15 09:17:57.474460] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:14.956 09:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:14.956 09:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xdKhUMsC9X 00:18:14.956 09:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:14.956 09:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:18:14.956 09:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:18:14.956 09:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:14.956 09:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:14.956 09:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:14.956 00:18:14.956 real 0m5.051s 00:18:14.956 user 0m6.156s 00:18:14.956 sys 0m0.691s 
00:18:14.956 09:17:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:14.956 09:17:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.956 ************************************ 00:18:14.956 END TEST raid_read_error_test 00:18:14.956 ************************************ 00:18:14.956 09:17:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:18:14.956 09:17:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:14.956 09:17:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:14.956 09:17:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:14.956 ************************************ 00:18:14.956 START TEST raid_write_error_test 00:18:14.956 ************************************ 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.x2Qaykmk0B 00:18:14.956 09:17:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75575 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75575 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 75575 ']' 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:14.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:14.956 09:17:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.956 [2024-10-15 09:17:58.844385] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:18:14.956 [2024-10-15 09:17:58.844611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75575 ] 00:18:15.214 [2024-10-15 09:17:59.011759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.473 [2024-10-15 09:17:59.161838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.473 [2024-10-15 09:17:59.394797] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:15.473 [2024-10-15 09:17:59.394903] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:16.039 09:17:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:16.039 09:17:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:18:16.039 09:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:16.039 09:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:16.039 09:17:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.039 09:17:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.039 BaseBdev1_malloc 00:18:16.039 09:17:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.039 09:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:16.039 09:17:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.039 09:17:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.039 true 00:18:16.039 09:17:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:16.039 09:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:16.039 09:17:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.039 09:17:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.039 [2024-10-15 09:17:59.941252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:16.039 [2024-10-15 09:17:59.941320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.039 [2024-10-15 09:17:59.941350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:16.039 [2024-10-15 09:17:59.941368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.040 [2024-10-15 09:17:59.944327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.040 [2024-10-15 09:17:59.944498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:16.040 BaseBdev1 00:18:16.040 09:17:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.040 09:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:16.040 09:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:16.040 09:17:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.040 09:17:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.299 BaseBdev2_malloc 00:18:16.299 09:17:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.299 09:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:16.299 09:17:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.299 09:17:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.299 true 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.299 [2024-10-15 09:18:00.005807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:16.299 [2024-10-15 09:18:00.006272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.299 [2024-10-15 09:18:00.006320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:16.299 [2024-10-15 09:18:00.006341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.299 [2024-10-15 09:18:00.009476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.299 [2024-10-15 09:18:00.009613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:16.299 BaseBdev2 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:18:16.299 BaseBdev3_malloc 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.299 true 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.299 [2024-10-15 09:18:00.098734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:16.299 [2024-10-15 09:18:00.099026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.299 [2024-10-15 09:18:00.099072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:16.299 [2024-10-15 09:18:00.099093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.299 [2024-10-15 09:18:00.102167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.299 [2024-10-15 09:18:00.102217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:16.299 BaseBdev3 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.299 BaseBdev4_malloc 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.299 true 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.299 [2024-10-15 09:18:00.166857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:16.299 [2024-10-15 09:18:00.166924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.299 [2024-10-15 09:18:00.166952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:16.299 [2024-10-15 09:18:00.166971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.299 [2024-10-15 09:18:00.169868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.299 [2024-10-15 09:18:00.169917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:16.299 BaseBdev4 
00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.299 [2024-10-15 09:18:00.174937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:16.299 [2024-10-15 09:18:00.177500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.299 [2024-10-15 09:18:00.177619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:16.299 [2024-10-15 09:18:00.177719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:16.299 [2024-10-15 09:18:00.178036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:18:16.299 [2024-10-15 09:18:00.178070] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:16.299 [2024-10-15 09:18:00.178395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:16.299 [2024-10-15 09:18:00.178652] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:18:16.299 [2024-10-15 09:18:00.178674] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:18:16.299 [2024-10-15 09:18:00.178912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.299 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.558 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.558 "name": "raid_bdev1", 00:18:16.558 "uuid": "835fab79-b794-4a3e-8d72-873590de1fdb", 00:18:16.558 "strip_size_kb": 0, 00:18:16.558 "state": "online", 00:18:16.558 "raid_level": "raid1", 00:18:16.558 "superblock": true, 00:18:16.558 "num_base_bdevs": 4, 00:18:16.558 "num_base_bdevs_discovered": 4, 00:18:16.558 
"num_base_bdevs_operational": 4, 00:18:16.558 "base_bdevs_list": [ 00:18:16.558 { 00:18:16.558 "name": "BaseBdev1", 00:18:16.558 "uuid": "2ab9b470-a232-5910-8489-036234566531", 00:18:16.558 "is_configured": true, 00:18:16.558 "data_offset": 2048, 00:18:16.558 "data_size": 63488 00:18:16.558 }, 00:18:16.558 { 00:18:16.558 "name": "BaseBdev2", 00:18:16.558 "uuid": "81d1ecfe-028d-57c4-9c64-f44437c4fe59", 00:18:16.558 "is_configured": true, 00:18:16.558 "data_offset": 2048, 00:18:16.558 "data_size": 63488 00:18:16.558 }, 00:18:16.558 { 00:18:16.558 "name": "BaseBdev3", 00:18:16.558 "uuid": "2534433a-f5a1-578a-9480-cff992c117e7", 00:18:16.558 "is_configured": true, 00:18:16.558 "data_offset": 2048, 00:18:16.558 "data_size": 63488 00:18:16.558 }, 00:18:16.558 { 00:18:16.558 "name": "BaseBdev4", 00:18:16.558 "uuid": "4b2671bf-972f-5c04-88a5-43e15ded36a2", 00:18:16.558 "is_configured": true, 00:18:16.558 "data_offset": 2048, 00:18:16.558 "data_size": 63488 00:18:16.558 } 00:18:16.558 ] 00:18:16.558 }' 00:18:16.558 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.558 09:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.817 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:16.817 09:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:17.075 [2024-10-15 09:18:00.852683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.015 [2024-10-15 09:18:01.728097] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:18:18.015 [2024-10-15 09:18:01.728184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:18.015 [2024-10-15 09:18:01.728502] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.015 "name": "raid_bdev1", 00:18:18.015 "uuid": "835fab79-b794-4a3e-8d72-873590de1fdb", 00:18:18.015 "strip_size_kb": 0, 00:18:18.015 "state": "online", 00:18:18.015 "raid_level": "raid1", 00:18:18.015 "superblock": true, 00:18:18.015 "num_base_bdevs": 4, 00:18:18.015 "num_base_bdevs_discovered": 3, 00:18:18.015 "num_base_bdevs_operational": 3, 00:18:18.015 "base_bdevs_list": [ 00:18:18.015 { 00:18:18.015 "name": null, 00:18:18.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.015 "is_configured": false, 00:18:18.015 "data_offset": 0, 00:18:18.015 "data_size": 63488 00:18:18.015 }, 00:18:18.015 { 00:18:18.015 "name": "BaseBdev2", 00:18:18.015 "uuid": "81d1ecfe-028d-57c4-9c64-f44437c4fe59", 00:18:18.015 "is_configured": true, 00:18:18.015 "data_offset": 2048, 00:18:18.015 "data_size": 63488 00:18:18.015 }, 00:18:18.015 { 00:18:18.015 "name": "BaseBdev3", 00:18:18.015 "uuid": "2534433a-f5a1-578a-9480-cff992c117e7", 00:18:18.015 "is_configured": true, 00:18:18.015 "data_offset": 2048, 00:18:18.015 "data_size": 63488 00:18:18.015 }, 00:18:18.015 { 00:18:18.015 "name": "BaseBdev4", 00:18:18.015 "uuid": "4b2671bf-972f-5c04-88a5-43e15ded36a2", 00:18:18.015 "is_configured": true, 00:18:18.015 "data_offset": 2048, 00:18:18.015 "data_size": 63488 00:18:18.015 } 00:18:18.015 ] 
00:18:18.015 }' 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.015 09:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.582 09:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:18.582 09:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.582 09:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.582 [2024-10-15 09:18:02.249006] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:18.582 [2024-10-15 09:18:02.249054] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:18.582 [2024-10-15 09:18:02.252523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:18.582 [2024-10-15 09:18:02.252591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.582 [2024-10-15 09:18:02.252742] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:18.582 [2024-10-15 09:18:02.252759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:18:18.582 { 00:18:18.582 "results": [ 00:18:18.582 { 00:18:18.582 "job": "raid_bdev1", 00:18:18.582 "core_mask": "0x1", 00:18:18.582 "workload": "randrw", 00:18:18.582 "percentage": 50, 00:18:18.582 "status": "finished", 00:18:18.582 "queue_depth": 1, 00:18:18.582 "io_size": 131072, 00:18:18.582 "runtime": 1.393822, 00:18:18.582 "iops": 6811.486689118123, 00:18:18.582 "mibps": 851.4358361397653, 00:18:18.582 "io_failed": 0, 00:18:18.582 "io_timeout": 0, 00:18:18.582 "avg_latency_us": 142.25376103567805, 00:18:18.582 "min_latency_us": 43.28727272727273, 00:18:18.582 "max_latency_us": 1839.4763636363637 00:18:18.582 } 00:18:18.582 ], 00:18:18.582 "core_count": 1 
00:18:18.582 } 00:18:18.582 09:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.582 09:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75575 00:18:18.582 09:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 75575 ']' 00:18:18.582 09:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 75575 00:18:18.582 09:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:18:18.582 09:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:18.582 09:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75575 00:18:18.582 09:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:18.582 09:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:18.582 killing process with pid 75575 00:18:18.582 09:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75575' 00:18:18.582 09:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 75575 00:18:18.582 [2024-10-15 09:18:02.293176] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:18.582 09:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 75575 00:18:18.840 [2024-10-15 09:18:02.611853] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:20.222 09:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.x2Qaykmk0B 00:18:20.222 09:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:20.222 09:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:20.222 09:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:18:20.222 09:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:18:20.222 09:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:20.222 09:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:20.222 09:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:20.222 00:18:20.222 real 0m5.072s 00:18:20.222 user 0m6.179s 00:18:20.222 sys 0m0.692s 00:18:20.222 09:18:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:20.222 09:18:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.222 ************************************ 00:18:20.222 END TEST raid_write_error_test 00:18:20.222 ************************************ 00:18:20.222 09:18:03 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:18:20.222 09:18:03 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:18:20.222 09:18:03 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:18:20.222 09:18:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:20.222 09:18:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:20.222 09:18:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:20.222 ************************************ 00:18:20.222 START TEST raid_rebuild_test 00:18:20.222 ************************************ 00:18:20.222 09:18:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:18:20.222 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:20.222 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:20.222 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:20.222 
09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:20.222 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:20.222 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:20.222 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:20.222 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:20.222 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75719 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75719 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 75719 ']' 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:20.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:20.223 09:18:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.223 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:20.223 Zero copy mechanism will not be used. 00:18:20.223 [2024-10-15 09:18:03.975714] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:18:20.223 [2024-10-15 09:18:03.975909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75719 ] 00:18:20.481 [2024-10-15 09:18:04.153801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.481 [2024-10-15 09:18:04.326216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.739 [2024-10-15 09:18:04.563847] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:20.739 [2024-10-15 09:18:04.563924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:20.998 09:18:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:20.998 09:18:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:18:20.998 09:18:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:20.998 09:18:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:20.998 09:18:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.998 09:18:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.257 BaseBdev1_malloc 00:18:21.257 09:18:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.257 09:18:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:21.257 09:18:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.257 09:18:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.257 [2024-10-15 09:18:04.954844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:21.257 
[2024-10-15 09:18:04.954939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.257 [2024-10-15 09:18:04.954979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:21.257 [2024-10-15 09:18:04.955001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.257 [2024-10-15 09:18:04.958041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.257 [2024-10-15 09:18:04.958103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:21.257 BaseBdev1 00:18:21.257 09:18:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.257 09:18:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:21.257 09:18:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:21.257 09:18:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.257 09:18:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.257 BaseBdev2_malloc 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.257 [2024-10-15 09:18:05.015972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:21.257 [2024-10-15 09:18:05.016051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.257 [2024-10-15 09:18:05.016094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:18:21.257 [2024-10-15 09:18:05.016128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.257 [2024-10-15 09:18:05.019466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.257 [2024-10-15 09:18:05.019514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:21.257 BaseBdev2 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.257 spare_malloc 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.257 spare_delay 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.257 [2024-10-15 09:18:05.090637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:21.257 [2024-10-15 09:18:05.090715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:18:21.257 [2024-10-15 09:18:05.090750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:21.257 [2024-10-15 09:18:05.090770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.257 [2024-10-15 09:18:05.093790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.257 [2024-10-15 09:18:05.093840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:21.257 spare 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.257 [2024-10-15 09:18:05.098707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:21.257 [2024-10-15 09:18:05.101272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:21.257 [2024-10-15 09:18:05.101424] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:21.257 [2024-10-15 09:18:05.101447] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:21.257 [2024-10-15 09:18:05.101802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:21.257 [2024-10-15 09:18:05.102042] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:21.257 [2024-10-15 09:18:05.102085] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:21.257 [2024-10-15 09:18:05.102310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.257 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.257 "name": "raid_bdev1", 00:18:21.258 "uuid": "aead3c30-44ed-4c43-acb1-8b81e93197c9", 00:18:21.258 "strip_size_kb": 0, 00:18:21.258 "state": "online", 00:18:21.258 
"raid_level": "raid1", 00:18:21.258 "superblock": false, 00:18:21.258 "num_base_bdevs": 2, 00:18:21.258 "num_base_bdevs_discovered": 2, 00:18:21.258 "num_base_bdevs_operational": 2, 00:18:21.258 "base_bdevs_list": [ 00:18:21.258 { 00:18:21.258 "name": "BaseBdev1", 00:18:21.258 "uuid": "8b8c73e1-0eec-54c8-933e-fb6c509841ef", 00:18:21.258 "is_configured": true, 00:18:21.258 "data_offset": 0, 00:18:21.258 "data_size": 65536 00:18:21.258 }, 00:18:21.258 { 00:18:21.258 "name": "BaseBdev2", 00:18:21.258 "uuid": "e06638ce-cd24-5d97-abe5-4de0df8e51ec", 00:18:21.258 "is_configured": true, 00:18:21.258 "data_offset": 0, 00:18:21.258 "data_size": 65536 00:18:21.258 } 00:18:21.258 ] 00:18:21.258 }' 00:18:21.258 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.258 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.887 [2024-10-15 09:18:05.619339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.887 09:18:05 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:21.887 09:18:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:22.144 [2024-10-15 09:18:06.015103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:22.144 /dev/nbd0 00:18:22.144 09:18:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:22.144 09:18:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:18:22.144 09:18:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:22.144 09:18:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:18:22.144 09:18:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:22.144 09:18:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:22.144 09:18:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:22.144 09:18:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:18:22.144 09:18:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:22.144 09:18:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:22.144 09:18:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:22.144 1+0 records in 00:18:22.144 1+0 records out 00:18:22.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424804 s, 9.6 MB/s 00:18:22.144 09:18:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.144 09:18:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:18:22.144 09:18:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.403 09:18:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:22.403 09:18:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:18:22.403 09:18:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:22.403 09:18:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:22.403 09:18:06 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:22.403 09:18:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:22.403 09:18:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:18:28.970 65536+0 records in 00:18:28.970 65536+0 records out 00:18:28.970 33554432 bytes (34 MB, 32 MiB) copied, 6.67611 s, 5.0 MB/s 00:18:28.970 09:18:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:28.970 09:18:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:28.970 09:18:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:28.970 09:18:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:28.970 09:18:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:28.970 09:18:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:28.970 09:18:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:29.229 [2024-10-15 09:18:13.066048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.229 [2024-10-15 09:18:13.098225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.229 09:18:13 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.229 "name": "raid_bdev1", 00:18:29.229 "uuid": "aead3c30-44ed-4c43-acb1-8b81e93197c9", 00:18:29.229 "strip_size_kb": 0, 00:18:29.229 "state": "online", 00:18:29.229 "raid_level": "raid1", 00:18:29.229 "superblock": false, 00:18:29.229 "num_base_bdevs": 2, 00:18:29.229 "num_base_bdevs_discovered": 1, 00:18:29.229 "num_base_bdevs_operational": 1, 00:18:29.229 "base_bdevs_list": [ 00:18:29.229 { 00:18:29.229 "name": null, 00:18:29.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.229 "is_configured": false, 00:18:29.229 "data_offset": 0, 00:18:29.229 "data_size": 65536 00:18:29.229 }, 00:18:29.229 { 00:18:29.229 "name": "BaseBdev2", 00:18:29.229 "uuid": "e06638ce-cd24-5d97-abe5-4de0df8e51ec", 00:18:29.229 "is_configured": true, 00:18:29.229 "data_offset": 0, 00:18:29.229 "data_size": 65536 00:18:29.229 } 00:18:29.229 ] 00:18:29.229 }' 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.229 09:18:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.796 09:18:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:29.796 09:18:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.796 09:18:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.796 [2024-10-15 09:18:13.610493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:29.796 [2024-10-15 09:18:13.628966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:18:29.796 09:18:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.796 09:18:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:29.796 [2024-10-15 09:18:13.632631] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:30.731 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.731 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.731 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.731 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.731 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.731 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.731 09:18:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.731 09:18:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.731 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.731 09:18:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.990 "name": "raid_bdev1", 00:18:30.990 "uuid": "aead3c30-44ed-4c43-acb1-8b81e93197c9", 00:18:30.990 "strip_size_kb": 0, 00:18:30.990 "state": "online", 00:18:30.990 "raid_level": "raid1", 00:18:30.990 "superblock": false, 00:18:30.990 "num_base_bdevs": 2, 00:18:30.990 "num_base_bdevs_discovered": 2, 00:18:30.990 "num_base_bdevs_operational": 2, 00:18:30.990 "process": { 00:18:30.990 "type": "rebuild", 00:18:30.990 "target": "spare", 00:18:30.990 "progress": { 00:18:30.990 
"blocks": 18432, 00:18:30.990 "percent": 28 00:18:30.990 } 00:18:30.990 }, 00:18:30.990 "base_bdevs_list": [ 00:18:30.990 { 00:18:30.990 "name": "spare", 00:18:30.990 "uuid": "5a05bd1c-731c-564c-9ed7-1f663822a606", 00:18:30.990 "is_configured": true, 00:18:30.990 "data_offset": 0, 00:18:30.990 "data_size": 65536 00:18:30.990 }, 00:18:30.990 { 00:18:30.990 "name": "BaseBdev2", 00:18:30.990 "uuid": "e06638ce-cd24-5d97-abe5-4de0df8e51ec", 00:18:30.990 "is_configured": true, 00:18:30.990 "data_offset": 0, 00:18:30.990 "data_size": 65536 00:18:30.990 } 00:18:30.990 ] 00:18:30.990 }' 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.990 [2024-10-15 09:18:14.803106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.990 [2024-10-15 09:18:14.845217] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:30.990 [2024-10-15 09:18:14.845355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.990 [2024-10-15 09:18:14.845381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.990 [2024-10-15 09:18:14.845399] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:30.990 09:18:14 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.990 09:18:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.248 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.248 "name": "raid_bdev1", 00:18:31.248 "uuid": "aead3c30-44ed-4c43-acb1-8b81e93197c9", 00:18:31.248 "strip_size_kb": 0, 00:18:31.248 "state": "online", 00:18:31.248 "raid_level": "raid1", 00:18:31.248 
"superblock": false, 00:18:31.248 "num_base_bdevs": 2, 00:18:31.248 "num_base_bdevs_discovered": 1, 00:18:31.248 "num_base_bdevs_operational": 1, 00:18:31.248 "base_bdevs_list": [ 00:18:31.248 { 00:18:31.248 "name": null, 00:18:31.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.248 "is_configured": false, 00:18:31.248 "data_offset": 0, 00:18:31.248 "data_size": 65536 00:18:31.248 }, 00:18:31.248 { 00:18:31.248 "name": "BaseBdev2", 00:18:31.248 "uuid": "e06638ce-cd24-5d97-abe5-4de0df8e51ec", 00:18:31.248 "is_configured": true, 00:18:31.248 "data_offset": 0, 00:18:31.248 "data_size": 65536 00:18:31.248 } 00:18:31.248 ] 00:18:31.248 }' 00:18:31.248 09:18:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.248 09:18:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.507 09:18:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:31.507 09:18:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.507 09:18:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:31.507 09:18:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:31.507 09:18:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.507 09:18:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.507 09:18:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.507 09:18:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.507 09:18:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.507 09:18:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.507 09:18:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:31.507 "name": "raid_bdev1", 00:18:31.507 "uuid": "aead3c30-44ed-4c43-acb1-8b81e93197c9", 00:18:31.507 "strip_size_kb": 0, 00:18:31.507 "state": "online", 00:18:31.507 "raid_level": "raid1", 00:18:31.507 "superblock": false, 00:18:31.507 "num_base_bdevs": 2, 00:18:31.507 "num_base_bdevs_discovered": 1, 00:18:31.507 "num_base_bdevs_operational": 1, 00:18:31.507 "base_bdevs_list": [ 00:18:31.507 { 00:18:31.507 "name": null, 00:18:31.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.507 "is_configured": false, 00:18:31.507 "data_offset": 0, 00:18:31.507 "data_size": 65536 00:18:31.507 }, 00:18:31.507 { 00:18:31.507 "name": "BaseBdev2", 00:18:31.507 "uuid": "e06638ce-cd24-5d97-abe5-4de0df8e51ec", 00:18:31.507 "is_configured": true, 00:18:31.507 "data_offset": 0, 00:18:31.507 "data_size": 65536 00:18:31.507 } 00:18:31.507 ] 00:18:31.507 }' 00:18:31.507 09:18:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.764 09:18:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:31.764 09:18:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.764 09:18:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:31.764 09:18:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:31.764 09:18:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.764 09:18:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.764 [2024-10-15 09:18:15.564145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:31.764 [2024-10-15 09:18:15.581232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:18:31.764 09:18:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.764 
09:18:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:31.764 [2024-10-15 09:18:15.584340] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:32.697 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:32.697 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.697 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:32.697 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:32.697 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.697 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.697 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.697 09:18:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.697 09:18:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.697 09:18:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.956 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.956 "name": "raid_bdev1", 00:18:32.956 "uuid": "aead3c30-44ed-4c43-acb1-8b81e93197c9", 00:18:32.956 "strip_size_kb": 0, 00:18:32.956 "state": "online", 00:18:32.956 "raid_level": "raid1", 00:18:32.956 "superblock": false, 00:18:32.956 "num_base_bdevs": 2, 00:18:32.956 "num_base_bdevs_discovered": 2, 00:18:32.956 "num_base_bdevs_operational": 2, 00:18:32.956 "process": { 00:18:32.956 "type": "rebuild", 00:18:32.956 "target": "spare", 00:18:32.956 "progress": { 00:18:32.956 "blocks": 20480, 00:18:32.956 "percent": 31 00:18:32.956 } 00:18:32.956 }, 00:18:32.956 "base_bdevs_list": [ 
00:18:32.956 { 00:18:32.956 "name": "spare", 00:18:32.956 "uuid": "5a05bd1c-731c-564c-9ed7-1f663822a606", 00:18:32.956 "is_configured": true, 00:18:32.956 "data_offset": 0, 00:18:32.956 "data_size": 65536 00:18:32.956 }, 00:18:32.956 { 00:18:32.956 "name": "BaseBdev2", 00:18:32.956 "uuid": "e06638ce-cd24-5d97-abe5-4de0df8e51ec", 00:18:32.956 "is_configured": true, 00:18:32.956 "data_offset": 0, 00:18:32.956 "data_size": 65536 00:18:32.956 } 00:18:32.956 ] 00:18:32.956 }' 00:18:32.956 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.956 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:32.956 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.956 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:32.956 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:32.956 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:32.956 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:32.956 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:32.956 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=407 00:18:32.956 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:32.956 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:32.956 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.956 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:32.956 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:32.956 
09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.956 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.956 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.956 09:18:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.956 09:18:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.956 09:18:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.956 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.956 "name": "raid_bdev1", 00:18:32.956 "uuid": "aead3c30-44ed-4c43-acb1-8b81e93197c9", 00:18:32.956 "strip_size_kb": 0, 00:18:32.956 "state": "online", 00:18:32.956 "raid_level": "raid1", 00:18:32.957 "superblock": false, 00:18:32.957 "num_base_bdevs": 2, 00:18:32.957 "num_base_bdevs_discovered": 2, 00:18:32.957 "num_base_bdevs_operational": 2, 00:18:32.957 "process": { 00:18:32.957 "type": "rebuild", 00:18:32.957 "target": "spare", 00:18:32.957 "progress": { 00:18:32.957 "blocks": 22528, 00:18:32.957 "percent": 34 00:18:32.957 } 00:18:32.957 }, 00:18:32.957 "base_bdevs_list": [ 00:18:32.957 { 00:18:32.957 "name": "spare", 00:18:32.957 "uuid": "5a05bd1c-731c-564c-9ed7-1f663822a606", 00:18:32.957 "is_configured": true, 00:18:32.957 "data_offset": 0, 00:18:32.957 "data_size": 65536 00:18:32.957 }, 00:18:32.957 { 00:18:32.957 "name": "BaseBdev2", 00:18:32.957 "uuid": "e06638ce-cd24-5d97-abe5-4de0df8e51ec", 00:18:32.957 "is_configured": true, 00:18:32.957 "data_offset": 0, 00:18:32.957 "data_size": 65536 00:18:32.957 } 00:18:32.957 ] 00:18:32.957 }' 00:18:32.957 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.957 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:18:32.957 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.215 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.215 09:18:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:34.152 09:18:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:34.152 09:18:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:34.152 09:18:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.152 09:18:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:34.152 09:18:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:34.152 09:18:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.152 09:18:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.152 09:18:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.152 09:18:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.152 09:18:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.152 09:18:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.152 09:18:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.152 "name": "raid_bdev1", 00:18:34.152 "uuid": "aead3c30-44ed-4c43-acb1-8b81e93197c9", 00:18:34.152 "strip_size_kb": 0, 00:18:34.152 "state": "online", 00:18:34.152 "raid_level": "raid1", 00:18:34.152 "superblock": false, 00:18:34.152 "num_base_bdevs": 2, 00:18:34.152 "num_base_bdevs_discovered": 2, 00:18:34.152 "num_base_bdevs_operational": 2, 00:18:34.152 "process": { 
00:18:34.152 "type": "rebuild", 00:18:34.152 "target": "spare", 00:18:34.152 "progress": { 00:18:34.152 "blocks": 47104, 00:18:34.152 "percent": 71 00:18:34.152 } 00:18:34.152 }, 00:18:34.152 "base_bdevs_list": [ 00:18:34.152 { 00:18:34.152 "name": "spare", 00:18:34.152 "uuid": "5a05bd1c-731c-564c-9ed7-1f663822a606", 00:18:34.152 "is_configured": true, 00:18:34.152 "data_offset": 0, 00:18:34.152 "data_size": 65536 00:18:34.152 }, 00:18:34.152 { 00:18:34.152 "name": "BaseBdev2", 00:18:34.152 "uuid": "e06638ce-cd24-5d97-abe5-4de0df8e51ec", 00:18:34.152 "is_configured": true, 00:18:34.152 "data_offset": 0, 00:18:34.152 "data_size": 65536 00:18:34.152 } 00:18:34.152 ] 00:18:34.152 }' 00:18:34.152 09:18:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.152 09:18:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:34.152 09:18:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.411 09:18:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:34.411 09:18:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:34.978 [2024-10-15 09:18:18.814940] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:34.978 [2024-10-15 09:18:18.815071] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:34.978 [2024-10-15 09:18:18.815180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.236 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:35.236 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.236 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.236 09:18:19 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.236 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:35.236 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.236 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.236 09:18:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.236 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.236 09:18:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.236 09:18:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.236 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.236 "name": "raid_bdev1", 00:18:35.236 "uuid": "aead3c30-44ed-4c43-acb1-8b81e93197c9", 00:18:35.236 "strip_size_kb": 0, 00:18:35.236 "state": "online", 00:18:35.236 "raid_level": "raid1", 00:18:35.236 "superblock": false, 00:18:35.236 "num_base_bdevs": 2, 00:18:35.236 "num_base_bdevs_discovered": 2, 00:18:35.236 "num_base_bdevs_operational": 2, 00:18:35.236 "base_bdevs_list": [ 00:18:35.236 { 00:18:35.236 "name": "spare", 00:18:35.236 "uuid": "5a05bd1c-731c-564c-9ed7-1f663822a606", 00:18:35.236 "is_configured": true, 00:18:35.236 "data_offset": 0, 00:18:35.236 "data_size": 65536 00:18:35.236 }, 00:18:35.236 { 00:18:35.236 "name": "BaseBdev2", 00:18:35.236 "uuid": "e06638ce-cd24-5d97-abe5-4de0df8e51ec", 00:18:35.236 "is_configured": true, 00:18:35.236 "data_offset": 0, 00:18:35.236 "data_size": 65536 00:18:35.236 } 00:18:35.236 ] 00:18:35.236 }' 00:18:35.236 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:35.495 09:18:19 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.495 "name": "raid_bdev1", 00:18:35.495 "uuid": "aead3c30-44ed-4c43-acb1-8b81e93197c9", 00:18:35.495 "strip_size_kb": 0, 00:18:35.495 "state": "online", 00:18:35.495 "raid_level": "raid1", 00:18:35.495 "superblock": false, 00:18:35.495 "num_base_bdevs": 2, 00:18:35.495 "num_base_bdevs_discovered": 2, 00:18:35.495 "num_base_bdevs_operational": 2, 00:18:35.495 "base_bdevs_list": [ 00:18:35.495 { 00:18:35.495 "name": "spare", 00:18:35.495 "uuid": "5a05bd1c-731c-564c-9ed7-1f663822a606", 00:18:35.495 "is_configured": true, 
00:18:35.495 "data_offset": 0, 00:18:35.495 "data_size": 65536 00:18:35.495 }, 00:18:35.495 { 00:18:35.495 "name": "BaseBdev2", 00:18:35.495 "uuid": "e06638ce-cd24-5d97-abe5-4de0df8e51ec", 00:18:35.495 "is_configured": true, 00:18:35.495 "data_offset": 0, 00:18:35.495 "data_size": 65536 00:18:35.495 } 00:18:35.495 ] 00:18:35.495 }' 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.495 09:18:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.754 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.754 "name": "raid_bdev1", 00:18:35.754 "uuid": "aead3c30-44ed-4c43-acb1-8b81e93197c9", 00:18:35.754 "strip_size_kb": 0, 00:18:35.754 "state": "online", 00:18:35.754 "raid_level": "raid1", 00:18:35.754 "superblock": false, 00:18:35.754 "num_base_bdevs": 2, 00:18:35.754 "num_base_bdevs_discovered": 2, 00:18:35.754 "num_base_bdevs_operational": 2, 00:18:35.754 "base_bdevs_list": [ 00:18:35.754 { 00:18:35.754 "name": "spare", 00:18:35.754 "uuid": "5a05bd1c-731c-564c-9ed7-1f663822a606", 00:18:35.754 "is_configured": true, 00:18:35.754 "data_offset": 0, 00:18:35.754 "data_size": 65536 00:18:35.754 }, 00:18:35.754 { 00:18:35.754 "name": "BaseBdev2", 00:18:35.754 "uuid": "e06638ce-cd24-5d97-abe5-4de0df8e51ec", 00:18:35.754 "is_configured": true, 00:18:35.754 "data_offset": 0, 00:18:35.754 "data_size": 65536 00:18:35.754 } 00:18:35.754 ] 00:18:35.754 }' 00:18:35.754 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.754 09:18:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.013 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:36.013 09:18:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.013 09:18:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.013 [2024-10-15 09:18:19.925592] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:36.013 [2024-10-15 09:18:19.925638] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:36.013 [2024-10-15 09:18:19.925775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:36.013 [2024-10-15 09:18:19.925883] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:36.013 [2024-10-15 09:18:19.925902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:36.013 09:18:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.013 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:36.013 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.013 09:18:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.013 09:18:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.312 09:18:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.312 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:36.312 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:36.312 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:36.312 09:18:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:36.312 09:18:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:36.312 09:18:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:36.312 09:18:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:36.312 09:18:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:18:36.312 09:18:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:36.312 09:18:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:36.312 09:18:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:36.312 09:18:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:36.312 09:18:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:36.572 /dev/nbd0 00:18:36.572 09:18:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:36.572 09:18:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:36.572 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:36.572 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:18:36.572 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:36.572 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:36.572 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:36.572 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:18:36.572 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:36.572 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:36.572 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:36.572 1+0 records in 00:18:36.572 1+0 records out 00:18:36.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403023 s, 10.2 MB/s 00:18:36.572 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.572 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:18:36.572 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.572 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:36.572 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:18:36.572 09:18:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:36.572 09:18:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:36.572 09:18:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:36.831 /dev/nbd1 00:18:36.831 09:18:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:36.831 09:18:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:36.831 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:36.831 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:18:36.831 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:36.831 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:36.832 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:36.832 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:18:36.832 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:36.832 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:36.832 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:36.832 1+0 records in 00:18:36.832 1+0 records out 00:18:36.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545187 s, 7.5 MB/s 00:18:36.832 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.832 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:18:36.832 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.832 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:36.832 09:18:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:18:36.832 09:18:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:36.832 09:18:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:36.832 09:18:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:37.091 09:18:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:37.091 09:18:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:37.091 09:18:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:37.091 09:18:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:37.091 09:18:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:37.091 09:18:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:37.091 09:18:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:37.350 09:18:21 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:37.350 09:18:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:37.350 09:18:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:37.350 09:18:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:37.350 09:18:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:37.350 09:18:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:37.350 09:18:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:37.350 09:18:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:37.350 09:18:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:37.350 09:18:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:37.609 09:18:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:37.609 09:18:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:37.609 09:18:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:37.609 09:18:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:37.609 09:18:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:37.609 09:18:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:37.609 09:18:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:37.609 09:18:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:37.609 09:18:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:37.609 09:18:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75719 00:18:37.609 09:18:21 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 75719 ']' 00:18:37.609 09:18:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 75719 00:18:37.609 09:18:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:18:37.609 09:18:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:37.609 09:18:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75719 00:18:37.609 killing process with pid 75719 00:18:37.609 Received shutdown signal, test time was about 60.000000 seconds 00:18:37.609 00:18:37.609 Latency(us) 00:18:37.609 [2024-10-15T09:18:21.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.609 [2024-10-15T09:18:21.537Z] =================================================================================================================== 00:18:37.609 [2024-10-15T09:18:21.537Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:37.609 09:18:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:37.609 09:18:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:37.609 09:18:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75719' 00:18:37.609 09:18:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 75719 00:18:37.609 09:18:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 75719 00:18:37.609 [2024-10-15 09:18:21.525103] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:38.175 [2024-10-15 09:18:21.876639] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:39.110 ************************************ 00:18:39.110 END TEST raid_rebuild_test 00:18:39.110 ************************************ 00:18:39.110 09:18:23 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@786 -- # return 0 00:18:39.110 00:18:39.110 real 0m19.152s 00:18:39.110 user 0m21.374s 00:18:39.110 sys 0m3.552s 00:18:39.110 09:18:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:39.110 09:18:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.369 09:18:23 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:18:39.369 09:18:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:39.369 09:18:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:39.369 09:18:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:39.369 ************************************ 00:18:39.369 START TEST raid_rebuild_test_sb 00:18:39.369 ************************************ 00:18:39.369 09:18:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:18:39.369 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:39.369 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:39.369 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:39.369 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:39.369 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( 
i <= num_base_bdevs )) 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76179 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76179 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 76179 ']' 00:18:39.370 
09:18:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:39.370 09:18:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.370 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:39.370 Zero copy mechanism will not be used. 00:18:39.370 [2024-10-15 09:18:23.178897] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:18:39.370 [2024-10-15 09:18:23.179076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76179 ] 00:18:39.629 [2024-10-15 09:18:23.346945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.629 [2024-10-15 09:18:23.492561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.887 [2024-10-15 09:18:23.715668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.887 [2024-10-15 09:18:23.715765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # 
for bdev in "${base_bdevs[@]}" 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.455 BaseBdev1_malloc 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.455 [2024-10-15 09:18:24.219141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:40.455 [2024-10-15 09:18:24.219388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.455 [2024-10-15 09:18:24.219436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:40.455 [2024-10-15 09:18:24.219458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.455 [2024-10-15 09:18:24.222522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.455 [2024-10-15 09:18:24.222693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:40.455 BaseBdev1 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:40.455 09:18:24 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.455 BaseBdev2_malloc 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.455 [2024-10-15 09:18:24.275084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:40.455 [2024-10-15 09:18:24.275189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.455 [2024-10-15 09:18:24.275222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:40.455 [2024-10-15 09:18:24.275243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.455 [2024-10-15 09:18:24.278283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.455 [2024-10-15 09:18:24.278331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:40.455 BaseBdev2 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.455 spare_malloc 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.455 spare_delay 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.455 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.455 [2024-10-15 09:18:24.352593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:40.455 [2024-10-15 09:18:24.352806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.455 [2024-10-15 09:18:24.352848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:40.455 [2024-10-15 09:18:24.352869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.456 [2024-10-15 09:18:24.355892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.456 [2024-10-15 09:18:24.356061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:40.456 spare 00:18:40.456 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.456 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:40.456 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.456 09:18:24 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.456 [2024-10-15 09:18:24.360797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:40.456 [2024-10-15 09:18:24.363369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:40.456 [2024-10-15 09:18:24.363607] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:40.456 [2024-10-15 09:18:24.363634] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:40.456 [2024-10-15 09:18:24.364005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:40.456 [2024-10-15 09:18:24.364269] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:40.456 [2024-10-15 09:18:24.364290] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:40.456 [2024-10-15 09:18:24.364515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.456 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.456 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:40.456 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.456 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.456 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.456 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.456 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:40.456 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:18:40.456 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.456 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.456 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.456 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.456 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.456 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.456 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.714 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.714 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.714 "name": "raid_bdev1", 00:18:40.714 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:18:40.714 "strip_size_kb": 0, 00:18:40.714 "state": "online", 00:18:40.714 "raid_level": "raid1", 00:18:40.714 "superblock": true, 00:18:40.714 "num_base_bdevs": 2, 00:18:40.714 "num_base_bdevs_discovered": 2, 00:18:40.714 "num_base_bdevs_operational": 2, 00:18:40.714 "base_bdevs_list": [ 00:18:40.714 { 00:18:40.714 "name": "BaseBdev1", 00:18:40.714 "uuid": "5903772a-94dc-5133-89be-d9d978ce30ab", 00:18:40.714 "is_configured": true, 00:18:40.714 "data_offset": 2048, 00:18:40.714 "data_size": 63488 00:18:40.714 }, 00:18:40.714 { 00:18:40.714 "name": "BaseBdev2", 00:18:40.714 "uuid": "ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:18:40.714 "is_configured": true, 00:18:40.714 "data_offset": 2048, 00:18:40.714 "data_size": 63488 00:18:40.714 } 00:18:40.714 ] 00:18:40.714 }' 00:18:40.714 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.714 09:18:24 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:40.973 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:40.973 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:40.973 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.973 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.973 [2024-10-15 09:18:24.825333] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:40.973 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.973 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:18:40.973 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.973 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.973 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.973 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:40.973 09:18:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.231 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:41.231 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:41.231 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:41.231 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:41.231 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:41.231 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:18:41.231 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:41.231 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:41.231 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:41.231 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:41.231 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:41.231 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:41.231 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:41.231 09:18:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:41.490 [2024-10-15 09:18:25.181137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:41.490 /dev/nbd0 00:18:41.490 09:18:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:41.490 09:18:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:41.490 09:18:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:41.490 09:18:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:18:41.490 09:18:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:41.490 09:18:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:41.490 09:18:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:41.490 09:18:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:18:41.490 09:18:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:18:41.490 09:18:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:41.490 09:18:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:41.490 1+0 records in 00:18:41.490 1+0 records out 00:18:41.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373197 s, 11.0 MB/s 00:18:41.490 09:18:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:41.490 09:18:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:18:41.490 09:18:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:41.490 09:18:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:41.490 09:18:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:18:41.490 09:18:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:41.490 09:18:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:41.490 09:18:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:41.490 09:18:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:41.490 09:18:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:18:48.091 63488+0 records in 00:18:48.091 63488+0 records out 00:18:48.091 32505856 bytes (33 MB, 31 MiB) copied, 6.43746 s, 5.0 MB/s 00:18:48.091 09:18:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:48.091 09:18:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:48.091 09:18:31 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:48.091 09:18:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:48.091 09:18:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:48.091 09:18:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:48.091 09:18:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:48.349 [2024-10-15 09:18:32.027894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.349 [2024-10-15 09:18:32.068041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.349 "name": "raid_bdev1", 00:18:48.349 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:18:48.349 "strip_size_kb": 0, 00:18:48.349 "state": "online", 00:18:48.349 "raid_level": "raid1", 00:18:48.349 "superblock": true, 
00:18:48.349 "num_base_bdevs": 2, 00:18:48.349 "num_base_bdevs_discovered": 1, 00:18:48.349 "num_base_bdevs_operational": 1, 00:18:48.349 "base_bdevs_list": [ 00:18:48.349 { 00:18:48.349 "name": null, 00:18:48.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.349 "is_configured": false, 00:18:48.349 "data_offset": 0, 00:18:48.349 "data_size": 63488 00:18:48.349 }, 00:18:48.349 { 00:18:48.349 "name": "BaseBdev2", 00:18:48.349 "uuid": "ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:18:48.349 "is_configured": true, 00:18:48.349 "data_offset": 2048, 00:18:48.349 "data_size": 63488 00:18:48.349 } 00:18:48.349 ] 00:18:48.349 }' 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.349 09:18:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.915 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:48.915 09:18:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.915 09:18:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.915 [2024-10-15 09:18:32.640298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:48.915 [2024-10-15 09:18:32.658745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:18:48.915 09:18:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.915 09:18:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:48.915 [2024-10-15 09:18:32.661766] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:49.850 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.850 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:18:49.850 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:49.850 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:49.850 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.850 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.850 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.850 09:18:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.850 09:18:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.850 09:18:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.850 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.850 "name": "raid_bdev1", 00:18:49.850 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:18:49.850 "strip_size_kb": 0, 00:18:49.850 "state": "online", 00:18:49.850 "raid_level": "raid1", 00:18:49.850 "superblock": true, 00:18:49.850 "num_base_bdevs": 2, 00:18:49.850 "num_base_bdevs_discovered": 2, 00:18:49.850 "num_base_bdevs_operational": 2, 00:18:49.850 "process": { 00:18:49.850 "type": "rebuild", 00:18:49.850 "target": "spare", 00:18:49.850 "progress": { 00:18:49.850 "blocks": 18432, 00:18:49.850 "percent": 29 00:18:49.850 } 00:18:49.850 }, 00:18:49.850 "base_bdevs_list": [ 00:18:49.850 { 00:18:49.850 "name": "spare", 00:18:49.850 "uuid": "5a50d7ab-5aa2-598e-baa1-35281a461b5c", 00:18:49.850 "is_configured": true, 00:18:49.850 "data_offset": 2048, 00:18:49.850 "data_size": 63488 00:18:49.850 }, 00:18:49.850 { 00:18:49.850 "name": "BaseBdev2", 00:18:49.850 "uuid": "ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:18:49.850 "is_configured": true, 00:18:49.850 "data_offset": 2048, 00:18:49.850 "data_size": 63488 
00:18:49.850 } 00:18:49.850 ] 00:18:49.850 }' 00:18:49.850 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.116 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:50.116 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.116 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:50.116 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:50.117 09:18:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.117 09:18:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.117 [2024-10-15 09:18:33.848233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.117 [2024-10-15 09:18:33.874449] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:50.117 [2024-10-15 09:18:33.874606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.117 [2024-10-15 09:18:33.874645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.117 [2024-10-15 09:18:33.874678] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:50.117 09:18:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.117 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:50.117 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.117 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.117 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:18:50.117 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.117 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:50.117 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.117 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.117 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.117 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.117 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.117 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.117 09:18:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.117 09:18:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.117 09:18:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.117 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.117 "name": "raid_bdev1", 00:18:50.117 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:18:50.117 "strip_size_kb": 0, 00:18:50.117 "state": "online", 00:18:50.117 "raid_level": "raid1", 00:18:50.117 "superblock": true, 00:18:50.117 "num_base_bdevs": 2, 00:18:50.117 "num_base_bdevs_discovered": 1, 00:18:50.117 "num_base_bdevs_operational": 1, 00:18:50.117 "base_bdevs_list": [ 00:18:50.117 { 00:18:50.117 "name": null, 00:18:50.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.117 "is_configured": false, 00:18:50.117 "data_offset": 0, 00:18:50.117 "data_size": 63488 00:18:50.117 }, 00:18:50.117 { 00:18:50.117 "name": "BaseBdev2", 00:18:50.117 "uuid": 
"ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:18:50.117 "is_configured": true, 00:18:50.117 "data_offset": 2048, 00:18:50.117 "data_size": 63488 00:18:50.117 } 00:18:50.117 ] 00:18:50.117 }' 00:18:50.117 09:18:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.117 09:18:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.684 09:18:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:50.684 09:18:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.684 09:18:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:50.684 09:18:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:50.684 09:18:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.684 09:18:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.684 09:18:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.684 09:18:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.684 09:18:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.684 09:18:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.684 09:18:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.684 "name": "raid_bdev1", 00:18:50.684 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:18:50.684 "strip_size_kb": 0, 00:18:50.684 "state": "online", 00:18:50.684 "raid_level": "raid1", 00:18:50.684 "superblock": true, 00:18:50.684 "num_base_bdevs": 2, 00:18:50.684 "num_base_bdevs_discovered": 1, 00:18:50.684 "num_base_bdevs_operational": 1, 00:18:50.684 "base_bdevs_list": [ 00:18:50.684 { 
00:18:50.684 "name": null, 00:18:50.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.684 "is_configured": false, 00:18:50.684 "data_offset": 0, 00:18:50.684 "data_size": 63488 00:18:50.684 }, 00:18:50.684 { 00:18:50.684 "name": "BaseBdev2", 00:18:50.684 "uuid": "ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:18:50.684 "is_configured": true, 00:18:50.684 "data_offset": 2048, 00:18:50.684 "data_size": 63488 00:18:50.684 } 00:18:50.684 ] 00:18:50.684 }' 00:18:50.684 09:18:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.684 09:18:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:50.684 09:18:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.684 09:18:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:50.684 09:18:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:50.684 09:18:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.684 09:18:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.942 [2024-10-15 09:18:34.613802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:50.942 [2024-10-15 09:18:34.630821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:18:50.942 09:18:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.942 09:18:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:50.942 [2024-10-15 09:18:34.633601] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:51.876 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.876 09:18:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.876 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.876 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.876 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.876 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.876 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.876 09:18:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.876 09:18:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.876 09:18:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.876 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.876 "name": "raid_bdev1", 00:18:51.876 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:18:51.876 "strip_size_kb": 0, 00:18:51.876 "state": "online", 00:18:51.876 "raid_level": "raid1", 00:18:51.876 "superblock": true, 00:18:51.876 "num_base_bdevs": 2, 00:18:51.876 "num_base_bdevs_discovered": 2, 00:18:51.876 "num_base_bdevs_operational": 2, 00:18:51.876 "process": { 00:18:51.876 "type": "rebuild", 00:18:51.876 "target": "spare", 00:18:51.876 "progress": { 00:18:51.876 "blocks": 18432, 00:18:51.876 "percent": 29 00:18:51.876 } 00:18:51.876 }, 00:18:51.876 "base_bdevs_list": [ 00:18:51.876 { 00:18:51.876 "name": "spare", 00:18:51.876 "uuid": "5a50d7ab-5aa2-598e-baa1-35281a461b5c", 00:18:51.876 "is_configured": true, 00:18:51.876 "data_offset": 2048, 00:18:51.876 "data_size": 63488 00:18:51.877 }, 00:18:51.877 { 00:18:51.877 "name": "BaseBdev2", 00:18:51.877 "uuid": "ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:18:51.877 
"is_configured": true, 00:18:51.877 "data_offset": 2048, 00:18:51.877 "data_size": 63488 00:18:51.877 } 00:18:51.877 ] 00:18:51.877 }' 00:18:51.877 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.877 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.877 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.877 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.877 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:51.877 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:51.877 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:51.877 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:51.877 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:51.877 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:51.877 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=426 00:18:51.877 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:51.877 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.877 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.877 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.877 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.877 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
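The `line 666: [: =: unary operator expected` error captured above is a classic single-bracket quoting bug: an unset or empty variable expanded unquoted inside `[ ... ]` leaves the test with no left-hand operand. A minimal standalone reproduction and the usual quoting fix (the variable name here is a hypothetical stand-in, not the one from `bdev_raid.sh`):

```shell
# When fast_mode is empty, the unquoted expansion collapses and the
# test builtin sees only `= false`, producing exactly the error in the log.
fast_mode=""

# Broken form (mirrors the log): expands to `[ = false ]`
if [ $fast_mode = false ] 2>/dev/null; then
  echo "never reached"
fi

# Fixed form: quoting keeps an (empty) operand in place;
# [[ ... ]] would also work, since it does not word-split.
if [ "$fast_mode" = false ]; then
  echo "mode disabled"
else
  echo "mode not disabled"
fi
```

The broken branch fails with status 2 rather than evaluating to false, which is why the script continues past it in the log: the `if` swallows the nonzero status instead of aborting the run.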
00:18:52.135 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.135 09:18:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.135 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.135 09:18:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.135 09:18:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.135 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.135 "name": "raid_bdev1", 00:18:52.135 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:18:52.135 "strip_size_kb": 0, 00:18:52.135 "state": "online", 00:18:52.135 "raid_level": "raid1", 00:18:52.135 "superblock": true, 00:18:52.135 "num_base_bdevs": 2, 00:18:52.135 "num_base_bdevs_discovered": 2, 00:18:52.135 "num_base_bdevs_operational": 2, 00:18:52.135 "process": { 00:18:52.135 "type": "rebuild", 00:18:52.135 "target": "spare", 00:18:52.135 "progress": { 00:18:52.135 "blocks": 22528, 00:18:52.135 "percent": 35 00:18:52.135 } 00:18:52.135 }, 00:18:52.135 "base_bdevs_list": [ 00:18:52.135 { 00:18:52.135 "name": "spare", 00:18:52.135 "uuid": "5a50d7ab-5aa2-598e-baa1-35281a461b5c", 00:18:52.135 "is_configured": true, 00:18:52.135 "data_offset": 2048, 00:18:52.135 "data_size": 63488 00:18:52.135 }, 00:18:52.135 { 00:18:52.135 "name": "BaseBdev2", 00:18:52.135 "uuid": "ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:18:52.135 "is_configured": true, 00:18:52.135 "data_offset": 2048, 00:18:52.135 "data_size": 63488 00:18:52.135 } 00:18:52.135 ] 00:18:52.135 }' 00:18:52.135 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.135 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.135 09:18:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.135 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.135 09:18:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:53.071 09:18:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:53.071 09:18:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.071 09:18:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.071 09:18:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:53.071 09:18:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:53.071 09:18:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.071 09:18:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.071 09:18:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.071 09:18:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.071 09:18:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.329 09:18:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.329 09:18:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.329 "name": "raid_bdev1", 00:18:53.329 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:18:53.329 "strip_size_kb": 0, 00:18:53.329 "state": "online", 00:18:53.329 "raid_level": "raid1", 00:18:53.329 "superblock": true, 00:18:53.329 "num_base_bdevs": 2, 00:18:53.329 "num_base_bdevs_discovered": 2, 00:18:53.329 "num_base_bdevs_operational": 2, 00:18:53.329 "process": { 
00:18:53.329 "type": "rebuild", 00:18:53.329 "target": "spare", 00:18:53.329 "progress": { 00:18:53.329 "blocks": 47104, 00:18:53.329 "percent": 74 00:18:53.329 } 00:18:53.329 }, 00:18:53.329 "base_bdevs_list": [ 00:18:53.329 { 00:18:53.329 "name": "spare", 00:18:53.329 "uuid": "5a50d7ab-5aa2-598e-baa1-35281a461b5c", 00:18:53.329 "is_configured": true, 00:18:53.329 "data_offset": 2048, 00:18:53.329 "data_size": 63488 00:18:53.329 }, 00:18:53.329 { 00:18:53.329 "name": "BaseBdev2", 00:18:53.329 "uuid": "ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:18:53.329 "is_configured": true, 00:18:53.329 "data_offset": 2048, 00:18:53.329 "data_size": 63488 00:18:53.329 } 00:18:53.329 ] 00:18:53.329 }' 00:18:53.329 09:18:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.329 09:18:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.329 09:18:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.329 09:18:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.329 09:18:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:53.895 [2024-10-15 09:18:37.763601] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:53.895 [2024-10-15 09:18:37.763733] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:53.895 [2024-10-15 09:18:37.763925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.463 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:54.463 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.463 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.463 
09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:54.463 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:54.463 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.463 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.463 09:18:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.463 09:18:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.463 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.463 09:18:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.463 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.463 "name": "raid_bdev1", 00:18:54.463 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:18:54.463 "strip_size_kb": 0, 00:18:54.463 "state": "online", 00:18:54.463 "raid_level": "raid1", 00:18:54.463 "superblock": true, 00:18:54.463 "num_base_bdevs": 2, 00:18:54.463 "num_base_bdevs_discovered": 2, 00:18:54.463 "num_base_bdevs_operational": 2, 00:18:54.463 "base_bdevs_list": [ 00:18:54.463 { 00:18:54.463 "name": "spare", 00:18:54.463 "uuid": "5a50d7ab-5aa2-598e-baa1-35281a461b5c", 00:18:54.463 "is_configured": true, 00:18:54.463 "data_offset": 2048, 00:18:54.463 "data_size": 63488 00:18:54.463 }, 00:18:54.463 { 00:18:54.463 "name": "BaseBdev2", 00:18:54.463 "uuid": "ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:18:54.463 "is_configured": true, 00:18:54.463 "data_offset": 2048, 00:18:54.463 "data_size": 63488 00:18:54.463 } 00:18:54.463 ] 00:18:54.463 }' 00:18:54.464 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.464 09:18:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:54.464 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.464 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:54.464 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:54.464 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:54.464 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.464 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:54.464 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:54.464 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.464 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.464 09:18:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.464 09:18:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.464 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.464 09:18:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.464 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.464 "name": "raid_bdev1", 00:18:54.464 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:18:54.464 "strip_size_kb": 0, 00:18:54.464 "state": "online", 00:18:54.464 "raid_level": "raid1", 00:18:54.464 "superblock": true, 00:18:54.464 "num_base_bdevs": 2, 00:18:54.464 "num_base_bdevs_discovered": 2, 00:18:54.464 "num_base_bdevs_operational": 2, 00:18:54.464 "base_bdevs_list": [ 00:18:54.464 { 00:18:54.464 
"name": "spare", 00:18:54.464 "uuid": "5a50d7ab-5aa2-598e-baa1-35281a461b5c", 00:18:54.464 "is_configured": true, 00:18:54.464 "data_offset": 2048, 00:18:54.464 "data_size": 63488 00:18:54.464 }, 00:18:54.464 { 00:18:54.464 "name": "BaseBdev2", 00:18:54.464 "uuid": "ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:18:54.464 "is_configured": true, 00:18:54.464 "data_offset": 2048, 00:18:54.464 "data_size": 63488 00:18:54.464 } 00:18:54.464 ] 00:18:54.464 }' 00:18:54.464 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.722 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:54.722 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.722 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:54.722 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:54.722 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.722 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.722 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.722 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.722 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:54.722 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.722 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.722 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.722 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:18:54.722 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.722 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.722 09:18:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.722 09:18:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.722 09:18:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.722 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.722 "name": "raid_bdev1", 00:18:54.722 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:18:54.722 "strip_size_kb": 0, 00:18:54.722 "state": "online", 00:18:54.722 "raid_level": "raid1", 00:18:54.722 "superblock": true, 00:18:54.722 "num_base_bdevs": 2, 00:18:54.722 "num_base_bdevs_discovered": 2, 00:18:54.722 "num_base_bdevs_operational": 2, 00:18:54.722 "base_bdevs_list": [ 00:18:54.722 { 00:18:54.722 "name": "spare", 00:18:54.722 "uuid": "5a50d7ab-5aa2-598e-baa1-35281a461b5c", 00:18:54.722 "is_configured": true, 00:18:54.722 "data_offset": 2048, 00:18:54.722 "data_size": 63488 00:18:54.722 }, 00:18:54.722 { 00:18:54.722 "name": "BaseBdev2", 00:18:54.722 "uuid": "ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:18:54.722 "is_configured": true, 00:18:54.722 "data_offset": 2048, 00:18:54.722 "data_size": 63488 00:18:54.722 } 00:18:54.722 ] 00:18:54.722 }' 00:18:54.722 09:18:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.722 09:18:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- 
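The verification steps above repeatedly filter the `bdev_raid_get_bdevs` JSON with the same two-step `jq` idiom: select the bdev by name, then read process fields with `//` supplying a `"none"` default when no rebuild is running. A self-contained sketch of that pattern, with a hypothetical inline sample standing in for the RPC output (field names mirror the log; values are illustrative):

```shell
# Illustrative sample of what `rpc_cmd bdev_raid_get_bdevs all` returns.
info='[{"name":"raid_bdev1","state":"online","process":{"type":"rebuild","target":"spare"}},
       {"name":"other_bdev","state":"online"}]'

# Step 1: isolate the bdev of interest, as verify_raid_bdev_process does.
bdev=$(printf '%s' "$info" | jq -r '.[] | select(.name == "raid_bdev1")')

# Step 2: jq's // operator yields "none" when .process (or a field) is absent,
# so the same expression works both during and after a rebuild.
ptype=$(printf '%s' "$bdev" | jq -r '.process.type // "none"')
ptarget=$(printf '%s' "$bdev" | jq -r '.process.target // "none"')
echo "$ptype/$ptarget"
```

For `other_bdev`, which has no `process` object, both expressions would print `none`, matching the `[[ none == \n\o\n\e ]]` checks seen later in the log once the rebuild finishes.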
common/autotest_common.sh@10 -- # set +x 00:18:55.289 [2024-10-15 09:18:39.013866] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:55.289 [2024-10-15 09:18:39.013917] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:55.289 [2024-10-15 09:18:39.014040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.289 [2024-10-15 09:18:39.014181] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:55.289 [2024-10-15 09:18:39.014214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:55.289 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:55.547 /dev/nbd0 00:18:55.547 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:55.547 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:55.547 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:55.547 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:18:55.547 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:55.547 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:55.547 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:55.547 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:18:55.547 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:55.547 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:55.547 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:55.547 1+0 records in 00:18:55.547 1+0 records out 00:18:55.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369584 s, 11.1 MB/s 00:18:55.547 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:55.547 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:18:55.547 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:55.805 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:55.805 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:18:55.805 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:55.805 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:55.805 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:56.126 /dev/nbd1 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:56.126 09:18:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:56.126 1+0 records in 00:18:56.126 1+0 records out 00:18:56.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539238 s, 7.6 MB/s 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:56.126 
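The `waitfornbd` output above shows the readiness check the harness applies to each exported device: poll `/proc/partitions` until the nbd name appears, then read one 4 KiB block with `iflag=direct` to confirm the device actually serves I/O. A minimal re-creation of that pattern; the device name and 20-try budget come from the log, but the loop body is a sketch, not the SPDK helper itself:

```shell
# Poll for an nbd device, then verify it with a single O_DIRECT read.
# Returns 0 once a block can be read, 1 if the device never shows up.
wait_for_nbd() {
  nbd_name=$1
  i=1
  while [ "$i" -le 20 ]; do
    if grep -q -w "$nbd_name" /proc/partitions; then
      # O_DIRECT bypasses the page cache, so success proves the backing
      # bdev answered a real read, not that a stale page was served.
      dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct 2>/dev/null
      return $?
    fi
    sleep 0.1
    i=$((i + 1))
  done
  return 1
}
```

Usage follows the log's shape: `wait_for_nbd nbd0` after `nbd_start_disk`, before the `cmp -i 1048576 /dev/nbd0 /dev/nbd1` data comparison.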
09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:56.126 09:18:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:56.694 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:56.694 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:56.694 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:56.694 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:56.694 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:56.694 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:56.694 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:56.694 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:56.694 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:56.694 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.954 [2024-10-15 09:18:40.679191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:56.954 [2024-10-15 09:18:40.679279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.954 [2024-10-15 09:18:40.679318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:56.954 [2024-10-15 09:18:40.679335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.954 [2024-10-15 09:18:40.682563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.954 [2024-10-15 09:18:40.682616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:56.954 [2024-10-15 09:18:40.682778] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:56.954 [2024-10-15 
09:18:40.682860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:56.954 [2024-10-15 09:18:40.683066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:56.954 spare 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.954 [2024-10-15 09:18:40.783290] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:56.954 [2024-10-15 09:18:40.783374] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:56.954 [2024-10-15 09:18:40.783865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:18:56.954 [2024-10-15 09:18:40.784187] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:56.954 [2024-10-15 09:18:40.784217] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:56.954 [2024-10-15 09:18:40.784472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.954 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.954 "name": "raid_bdev1", 00:18:56.954 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:18:56.954 "strip_size_kb": 0, 00:18:56.954 "state": "online", 00:18:56.954 "raid_level": "raid1", 00:18:56.954 "superblock": true, 00:18:56.954 "num_base_bdevs": 2, 00:18:56.954 "num_base_bdevs_discovered": 2, 00:18:56.954 "num_base_bdevs_operational": 2, 00:18:56.954 "base_bdevs_list": [ 00:18:56.954 { 00:18:56.954 "name": "spare", 00:18:56.954 "uuid": "5a50d7ab-5aa2-598e-baa1-35281a461b5c", 00:18:56.954 "is_configured": true, 00:18:56.954 "data_offset": 2048, 00:18:56.954 "data_size": 63488 00:18:56.955 }, 00:18:56.955 { 00:18:56.955 "name": "BaseBdev2", 00:18:56.955 "uuid": 
"ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:18:56.955 "is_configured": true, 00:18:56.955 "data_offset": 2048, 00:18:56.955 "data_size": 63488 00:18:56.955 } 00:18:56.955 ] 00:18:56.955 }' 00:18:56.955 09:18:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.955 09:18:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.532 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:57.532 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.532 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:57.532 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:57.532 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.532 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.532 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.532 09:18:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.532 09:18:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.532 09:18:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.532 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.532 "name": "raid_bdev1", 00:18:57.532 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:18:57.532 "strip_size_kb": 0, 00:18:57.532 "state": "online", 00:18:57.532 "raid_level": "raid1", 00:18:57.532 "superblock": true, 00:18:57.532 "num_base_bdevs": 2, 00:18:57.532 "num_base_bdevs_discovered": 2, 00:18:57.532 "num_base_bdevs_operational": 2, 00:18:57.532 "base_bdevs_list": [ 00:18:57.532 { 
00:18:57.532 "name": "spare", 00:18:57.532 "uuid": "5a50d7ab-5aa2-598e-baa1-35281a461b5c", 00:18:57.532 "is_configured": true, 00:18:57.532 "data_offset": 2048, 00:18:57.532 "data_size": 63488 00:18:57.532 }, 00:18:57.532 { 00:18:57.532 "name": "BaseBdev2", 00:18:57.532 "uuid": "ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:18:57.532 "is_configured": true, 00:18:57.532 "data_offset": 2048, 00:18:57.532 "data_size": 63488 00:18:57.532 } 00:18:57.532 ] 00:18:57.532 }' 00:18:57.532 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.532 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:57.532 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.532 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:57.532 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:57.532 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.532 09:18:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.532 09:18:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.532 09:18:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.533 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.533 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:57.533 09:18:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.533 09:18:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.533 [2024-10-15 09:18:41.459493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:18:57.791 09:18:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.791 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:57.791 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.791 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.791 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.791 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.791 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:57.791 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.791 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.791 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.791 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.791 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.791 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.791 09:18:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.791 09:18:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.791 09:18:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.791 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.791 "name": "raid_bdev1", 00:18:57.791 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:18:57.791 "strip_size_kb": 0, 00:18:57.791 
"state": "online", 00:18:57.791 "raid_level": "raid1", 00:18:57.791 "superblock": true, 00:18:57.791 "num_base_bdevs": 2, 00:18:57.791 "num_base_bdevs_discovered": 1, 00:18:57.791 "num_base_bdevs_operational": 1, 00:18:57.791 "base_bdevs_list": [ 00:18:57.791 { 00:18:57.791 "name": null, 00:18:57.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.791 "is_configured": false, 00:18:57.791 "data_offset": 0, 00:18:57.791 "data_size": 63488 00:18:57.791 }, 00:18:57.791 { 00:18:57.791 "name": "BaseBdev2", 00:18:57.791 "uuid": "ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:18:57.791 "is_configured": true, 00:18:57.791 "data_offset": 2048, 00:18:57.791 "data_size": 63488 00:18:57.791 } 00:18:57.791 ] 00:18:57.791 }' 00:18:57.791 09:18:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.791 09:18:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.358 09:18:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:58.358 09:18:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.358 09:18:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.358 [2024-10-15 09:18:42.031688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:58.358 [2024-10-15 09:18:42.031975] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:58.358 [2024-10-15 09:18:42.032004] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:58.358 [2024-10-15 09:18:42.032062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:58.358 [2024-10-15 09:18:42.048458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:18:58.358 09:18:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.358 09:18:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:58.358 [2024-10-15 09:18:42.051256] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:59.295 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:59.295 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.295 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:59.295 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:59.295 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.295 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.295 09:18:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.295 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.295 09:18:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.295 09:18:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.295 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:59.295 "name": "raid_bdev1", 00:18:59.295 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:18:59.295 "strip_size_kb": 0, 00:18:59.295 "state": "online", 00:18:59.295 "raid_level": "raid1", 
00:18:59.295 "superblock": true, 00:18:59.295 "num_base_bdevs": 2, 00:18:59.295 "num_base_bdevs_discovered": 2, 00:18:59.295 "num_base_bdevs_operational": 2, 00:18:59.295 "process": { 00:18:59.295 "type": "rebuild", 00:18:59.295 "target": "spare", 00:18:59.295 "progress": { 00:18:59.295 "blocks": 20480, 00:18:59.295 "percent": 32 00:18:59.295 } 00:18:59.295 }, 00:18:59.295 "base_bdevs_list": [ 00:18:59.295 { 00:18:59.295 "name": "spare", 00:18:59.295 "uuid": "5a50d7ab-5aa2-598e-baa1-35281a461b5c", 00:18:59.295 "is_configured": true, 00:18:59.295 "data_offset": 2048, 00:18:59.295 "data_size": 63488 00:18:59.295 }, 00:18:59.295 { 00:18:59.295 "name": "BaseBdev2", 00:18:59.295 "uuid": "ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:18:59.295 "is_configured": true, 00:18:59.295 "data_offset": 2048, 00:18:59.295 "data_size": 63488 00:18:59.295 } 00:18:59.295 ] 00:18:59.295 }' 00:18:59.295 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.295 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:59.295 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.295 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:59.295 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:59.295 09:18:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.295 09:18:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.295 [2024-10-15 09:18:43.220875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:59.554 [2024-10-15 09:18:43.262469] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:59.554 [2024-10-15 09:18:43.262605] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:18:59.554 [2024-10-15 09:18:43.262632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:59.554 [2024-10-15 09:18:43.262649] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:59.554 09:18:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.554 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:59.554 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.554 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.554 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.554 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.554 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:59.554 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.554 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.554 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.554 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.554 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.554 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.554 09:18:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.554 09:18:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.554 09:18:43 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.554 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.554 "name": "raid_bdev1", 00:18:59.554 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:18:59.554 "strip_size_kb": 0, 00:18:59.554 "state": "online", 00:18:59.554 "raid_level": "raid1", 00:18:59.554 "superblock": true, 00:18:59.554 "num_base_bdevs": 2, 00:18:59.554 "num_base_bdevs_discovered": 1, 00:18:59.554 "num_base_bdevs_operational": 1, 00:18:59.554 "base_bdevs_list": [ 00:18:59.554 { 00:18:59.554 "name": null, 00:18:59.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.554 "is_configured": false, 00:18:59.554 "data_offset": 0, 00:18:59.554 "data_size": 63488 00:18:59.554 }, 00:18:59.554 { 00:18:59.554 "name": "BaseBdev2", 00:18:59.554 "uuid": "ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:18:59.554 "is_configured": true, 00:18:59.554 "data_offset": 2048, 00:18:59.554 "data_size": 63488 00:18:59.554 } 00:18:59.554 ] 00:18:59.554 }' 00:18:59.554 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.554 09:18:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.122 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:00.122 09:18:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.122 09:18:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.122 [2024-10-15 09:18:43.828545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:00.122 [2024-10-15 09:18:43.828783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.122 [2024-10-15 09:18:43.828828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:00.122 [2024-10-15 09:18:43.828849] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.122 [2024-10-15 09:18:43.829567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.122 [2024-10-15 09:18:43.829608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:00.122 [2024-10-15 09:18:43.829746] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:00.122 [2024-10-15 09:18:43.829774] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:00.122 [2024-10-15 09:18:43.829790] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:00.122 [2024-10-15 09:18:43.829831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:00.122 [2024-10-15 09:18:43.846320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:19:00.122 spare 00:19:00.122 09:18:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.122 09:18:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:00.122 [2024-10-15 09:18:43.849043] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:01.057 09:18:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:01.057 09:18:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.057 09:18:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:01.057 09:18:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:01.057 09:18:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.057 09:18:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:01.057 09:18:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.057 09:18:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.057 09:18:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.057 09:18:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.057 09:18:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.057 "name": "raid_bdev1", 00:19:01.057 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:19:01.057 "strip_size_kb": 0, 00:19:01.057 "state": "online", 00:19:01.057 "raid_level": "raid1", 00:19:01.057 "superblock": true, 00:19:01.057 "num_base_bdevs": 2, 00:19:01.057 "num_base_bdevs_discovered": 2, 00:19:01.057 "num_base_bdevs_operational": 2, 00:19:01.057 "process": { 00:19:01.057 "type": "rebuild", 00:19:01.057 "target": "spare", 00:19:01.057 "progress": { 00:19:01.057 "blocks": 20480, 00:19:01.057 "percent": 32 00:19:01.057 } 00:19:01.057 }, 00:19:01.057 "base_bdevs_list": [ 00:19:01.057 { 00:19:01.057 "name": "spare", 00:19:01.057 "uuid": "5a50d7ab-5aa2-598e-baa1-35281a461b5c", 00:19:01.057 "is_configured": true, 00:19:01.057 "data_offset": 2048, 00:19:01.057 "data_size": 63488 00:19:01.057 }, 00:19:01.057 { 00:19:01.057 "name": "BaseBdev2", 00:19:01.057 "uuid": "ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:19:01.057 "is_configured": true, 00:19:01.057 "data_offset": 2048, 00:19:01.057 "data_size": 63488 00:19:01.057 } 00:19:01.057 ] 00:19:01.057 }' 00:19:01.057 09:18:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.057 09:18:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:01.057 09:18:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.316 
09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.316 [2024-10-15 09:18:45.018842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:01.316 [2024-10-15 09:18:45.060450] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:01.316 [2024-10-15 09:18:45.060552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.316 [2024-10-15 09:18:45.060584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:01.316 [2024-10-15 09:18:45.060597] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.316 "name": "raid_bdev1", 00:19:01.316 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:19:01.316 "strip_size_kb": 0, 00:19:01.316 "state": "online", 00:19:01.316 "raid_level": "raid1", 00:19:01.316 "superblock": true, 00:19:01.316 "num_base_bdevs": 2, 00:19:01.316 "num_base_bdevs_discovered": 1, 00:19:01.316 "num_base_bdevs_operational": 1, 00:19:01.316 "base_bdevs_list": [ 00:19:01.316 { 00:19:01.316 "name": null, 00:19:01.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.316 "is_configured": false, 00:19:01.316 "data_offset": 0, 00:19:01.316 "data_size": 63488 00:19:01.316 }, 00:19:01.316 { 00:19:01.316 "name": "BaseBdev2", 00:19:01.316 "uuid": "ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:19:01.316 "is_configured": true, 00:19:01.316 "data_offset": 2048, 00:19:01.316 "data_size": 63488 00:19:01.316 } 00:19:01.316 ] 00:19:01.316 }' 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.316 09:18:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.884 09:18:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:01.884 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.884 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:01.884 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:01.884 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.884 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.885 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.885 09:18:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.885 09:18:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.885 09:18:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.885 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.885 "name": "raid_bdev1", 00:19:01.885 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:19:01.885 "strip_size_kb": 0, 00:19:01.885 "state": "online", 00:19:01.885 "raid_level": "raid1", 00:19:01.885 "superblock": true, 00:19:01.885 "num_base_bdevs": 2, 00:19:01.885 "num_base_bdevs_discovered": 1, 00:19:01.885 "num_base_bdevs_operational": 1, 00:19:01.885 "base_bdevs_list": [ 00:19:01.885 { 00:19:01.885 "name": null, 00:19:01.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.885 "is_configured": false, 00:19:01.885 "data_offset": 0, 00:19:01.885 "data_size": 63488 00:19:01.885 }, 00:19:01.885 { 00:19:01.885 "name": "BaseBdev2", 00:19:01.885 "uuid": "ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:19:01.885 "is_configured": true, 00:19:01.885 "data_offset": 2048, 00:19:01.885 "data_size": 
63488 00:19:01.885 } 00:19:01.885 ] 00:19:01.885 }' 00:19:01.885 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.885 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:01.885 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.885 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:01.885 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:01.885 09:18:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.885 09:18:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.885 09:18:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.885 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:01.885 09:18:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.885 09:18:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.885 [2024-10-15 09:18:45.794517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:01.885 [2024-10-15 09:18:45.794727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.885 [2024-10-15 09:18:45.794778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:01.885 [2024-10-15 09:18:45.794808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.885 [2024-10-15 09:18:45.795500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.885 [2024-10-15 09:18:45.795532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:19:01.885 [2024-10-15 09:18:45.795663] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:01.885 [2024-10-15 09:18:45.795687] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:01.885 [2024-10-15 09:18:45.795703] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:01.885 [2024-10-15 09:18:45.795718] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:01.885 BaseBdev1 00:19:01.885 09:18:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.885 09:18:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:03.263 09:18:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:03.263 09:18:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.263 09:18:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.263 09:18:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.263 09:18:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.263 09:18:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:03.263 09:18:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.263 09:18:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.263 09:18:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.263 09:18:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.263 09:18:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.263 09:18:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.263 09:18:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.263 09:18:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.263 09:18:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.263 09:18:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.263 "name": "raid_bdev1", 00:19:03.263 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:19:03.263 "strip_size_kb": 0, 00:19:03.263 "state": "online", 00:19:03.263 "raid_level": "raid1", 00:19:03.263 "superblock": true, 00:19:03.263 "num_base_bdevs": 2, 00:19:03.263 "num_base_bdevs_discovered": 1, 00:19:03.263 "num_base_bdevs_operational": 1, 00:19:03.263 "base_bdevs_list": [ 00:19:03.263 { 00:19:03.263 "name": null, 00:19:03.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.263 "is_configured": false, 00:19:03.263 "data_offset": 0, 00:19:03.263 "data_size": 63488 00:19:03.263 }, 00:19:03.263 { 00:19:03.263 "name": "BaseBdev2", 00:19:03.263 "uuid": "ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:19:03.263 "is_configured": true, 00:19:03.263 "data_offset": 2048, 00:19:03.263 "data_size": 63488 00:19:03.263 } 00:19:03.263 ] 00:19:03.263 }' 00:19:03.263 09:18:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.263 09:18:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.522 09:18:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:03.522 09:18:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.522 09:18:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:19:03.522 09:18:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:03.522 09:18:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.522 09:18:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.522 09:18:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.522 09:18:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.522 09:18:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.522 09:18:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.522 09:18:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.522 "name": "raid_bdev1", 00:19:03.522 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:19:03.522 "strip_size_kb": 0, 00:19:03.522 "state": "online", 00:19:03.522 "raid_level": "raid1", 00:19:03.522 "superblock": true, 00:19:03.522 "num_base_bdevs": 2, 00:19:03.522 "num_base_bdevs_discovered": 1, 00:19:03.522 "num_base_bdevs_operational": 1, 00:19:03.522 "base_bdevs_list": [ 00:19:03.522 { 00:19:03.522 "name": null, 00:19:03.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.522 "is_configured": false, 00:19:03.522 "data_offset": 0, 00:19:03.522 "data_size": 63488 00:19:03.522 }, 00:19:03.522 { 00:19:03.522 "name": "BaseBdev2", 00:19:03.522 "uuid": "ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:19:03.522 "is_configured": true, 00:19:03.522 "data_offset": 2048, 00:19:03.522 "data_size": 63488 00:19:03.522 } 00:19:03.522 ] 00:19:03.522 }' 00:19:03.522 09:18:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.522 09:18:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:03.522 09:18:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.781 09:18:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:03.781 09:18:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:03.781 09:18:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:19:03.781 09:18:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:03.781 09:18:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:03.781 09:18:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:03.781 09:18:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:03.781 09:18:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:03.781 09:18:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:03.781 09:18:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.781 09:18:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.781 [2024-10-15 09:18:47.491065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:03.781 [2024-10-15 09:18:47.491326] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:03.781 [2024-10-15 09:18:47.491353] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:03.781 request: 00:19:03.781 { 00:19:03.781 "base_bdev": "BaseBdev1", 00:19:03.781 "raid_bdev": "raid_bdev1", 00:19:03.781 "method": 
"bdev_raid_add_base_bdev", 00:19:03.781 "req_id": 1 00:19:03.781 } 00:19:03.781 Got JSON-RPC error response 00:19:03.781 response: 00:19:03.781 { 00:19:03.781 "code": -22, 00:19:03.781 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:03.781 } 00:19:03.781 09:18:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:03.781 09:18:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:19:03.781 09:18:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:03.781 09:18:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:03.781 09:18:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:03.781 09:18:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:04.716 09:18:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:04.716 09:18:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.716 09:18:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.716 09:18:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.716 09:18:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.717 09:18:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:04.717 09:18:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.717 09:18:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.717 09:18:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.717 09:18:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.717 09:18:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.717 09:18:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.717 09:18:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.717 09:18:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.717 09:18:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.717 09:18:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.717 "name": "raid_bdev1", 00:19:04.717 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:19:04.717 "strip_size_kb": 0, 00:19:04.717 "state": "online", 00:19:04.717 "raid_level": "raid1", 00:19:04.717 "superblock": true, 00:19:04.717 "num_base_bdevs": 2, 00:19:04.717 "num_base_bdevs_discovered": 1, 00:19:04.717 "num_base_bdevs_operational": 1, 00:19:04.717 "base_bdevs_list": [ 00:19:04.717 { 00:19:04.717 "name": null, 00:19:04.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.717 "is_configured": false, 00:19:04.717 "data_offset": 0, 00:19:04.717 "data_size": 63488 00:19:04.717 }, 00:19:04.717 { 00:19:04.717 "name": "BaseBdev2", 00:19:04.717 "uuid": "ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:19:04.717 "is_configured": true, 00:19:04.717 "data_offset": 2048, 00:19:04.717 "data_size": 63488 00:19:04.717 } 00:19:04.717 ] 00:19:04.717 }' 00:19:04.717 09:18:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.717 09:18:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.288 09:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:05.288 09:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.288 09:18:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:05.288 09:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:05.288 09:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.288 09:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.288 09:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.288 09:18:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.288 09:18:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.288 09:18:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.288 09:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.288 "name": "raid_bdev1", 00:19:05.288 "uuid": "5731a1e2-7feb-4283-b154-356a61069447", 00:19:05.288 "strip_size_kb": 0, 00:19:05.288 "state": "online", 00:19:05.288 "raid_level": "raid1", 00:19:05.288 "superblock": true, 00:19:05.288 "num_base_bdevs": 2, 00:19:05.288 "num_base_bdevs_discovered": 1, 00:19:05.288 "num_base_bdevs_operational": 1, 00:19:05.288 "base_bdevs_list": [ 00:19:05.288 { 00:19:05.288 "name": null, 00:19:05.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.288 "is_configured": false, 00:19:05.288 "data_offset": 0, 00:19:05.288 "data_size": 63488 00:19:05.288 }, 00:19:05.288 { 00:19:05.288 "name": "BaseBdev2", 00:19:05.288 "uuid": "ba1d4884-ea75-5f99-859c-2dd4430234a2", 00:19:05.288 "is_configured": true, 00:19:05.288 "data_offset": 2048, 00:19:05.288 "data_size": 63488 00:19:05.288 } 00:19:05.288 ] 00:19:05.288 }' 00:19:05.288 09:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.288 09:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:19:05.288 09:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.288 09:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:05.288 09:18:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76179 00:19:05.288 09:18:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 76179 ']' 00:19:05.288 09:18:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 76179 00:19:05.288 09:18:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:19:05.288 09:18:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:05.288 09:18:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76179 00:19:05.548 09:18:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:05.548 killing process with pid 76179 00:19:05.548 09:18:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:05.548 09:18:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76179' 00:19:05.548 Received shutdown signal, test time was about 60.000000 seconds 00:19:05.548 00:19:05.548 Latency(us) 00:19:05.548 [2024-10-15T09:18:49.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.548 [2024-10-15T09:18:49.476Z] =================================================================================================================== 00:19:05.548 [2024-10-15T09:18:49.476Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:05.548 09:18:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 76179 00:19:05.548 [2024-10-15 09:18:49.241844] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:05.548 09:18:49 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 76179 00:19:05.548 [2024-10-15 09:18:49.242026] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:05.548 [2024-10-15 09:18:49.242107] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:05.548 [2024-10-15 09:18:49.242156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:05.806 [2024-10-15 09:18:49.534383] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:06.742 ************************************ 00:19:06.742 END TEST raid_rebuild_test_sb 00:19:06.742 ************************************ 00:19:06.742 09:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:06.742 00:19:06.742 real 0m27.596s 00:19:06.742 user 0m33.892s 00:19:06.742 sys 0m4.354s 00:19:06.742 09:18:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:06.742 09:18:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.000 09:18:50 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:19:07.000 09:18:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:07.000 09:18:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:07.000 09:18:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:07.000 ************************************ 00:19:07.000 START TEST raid_rebuild_test_io 00:19:07.000 ************************************ 00:19:07.000 09:18:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:19:07.000 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:07.000 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:19:07.000 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:07.000 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:19:07.000 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:07.000 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:07.000 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:07.001 
09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76945 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76945 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 76945 ']' 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:07.001 09:18:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:07.001 [2024-10-15 09:18:50.830105] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:19:07.001 [2024-10-15 09:18:50.830608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76945 ] 00:19:07.001 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:07.001 Zero copy mechanism will not be used. 
00:19:07.259 [2024-10-15 09:18:51.009587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.259 [2024-10-15 09:18:51.157404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.518 [2024-10-15 09:18:51.383147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:07.518 [2024-10-15 09:18:51.383500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:08.085 09:18:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:08.085 09:18:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:19:08.085 09:18:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:08.085 09:18:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:08.085 09:18:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.085 09:18:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:08.085 BaseBdev1_malloc 00:19:08.085 09:18:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.085 09:18:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:08.085 09:18:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.085 09:18:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:08.085 [2024-10-15 09:18:51.923186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:08.085 [2024-10-15 09:18:51.923776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.085 [2024-10-15 09:18:51.924024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:08.085 [2024-10-15 
09:18:51.924179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.085 [2024-10-15 09:18:51.927502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.085 [2024-10-15 09:18:51.927689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:08.086 BaseBdev1 00:19:08.086 09:18:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.086 09:18:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:08.086 09:18:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:08.086 09:18:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.086 09:18:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:08.086 BaseBdev2_malloc 00:19:08.086 09:18:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.086 09:18:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:08.086 09:18:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.086 09:18:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:08.086 [2024-10-15 09:18:51.980862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:08.086 [2024-10-15 09:18:51.981150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.086 [2024-10-15 09:18:51.981206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:08.086 [2024-10-15 09:18:51.981226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.086 [2024-10-15 09:18:51.984512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:19:08.086 [2024-10-15 09:18:51.984572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:08.086 BaseBdev2 00:19:08.086 09:18:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.086 09:18:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:08.086 09:18:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.086 09:18:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:08.345 spare_malloc 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:08.345 spare_delay 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:08.345 [2024-10-15 09:18:52.059496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:08.345 [2024-10-15 09:18:52.059586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.345 [2024-10-15 09:18:52.059631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:08.345 [2024-10-15 09:18:52.059650] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.345 [2024-10-15 09:18:52.062824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.345 [2024-10-15 09:18:52.062880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:08.345 spare 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:08.345 [2024-10-15 09:18:52.071675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:08.345 [2024-10-15 09:18:52.074522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:08.345 [2024-10-15 09:18:52.074682] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:08.345 [2024-10-15 09:18:52.074703] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:08.345 [2024-10-15 09:18:52.075252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:08.345 [2024-10-15 09:18:52.075540] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:08.345 [2024-10-15 09:18:52.075595] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:08.345 [2024-10-15 09:18:52.076015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.345 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.345 "name": "raid_bdev1", 00:19:08.345 "uuid": "cdeeb134-7cdf-446d-ac24-b6a3a55e78c7", 00:19:08.345 "strip_size_kb": 0, 00:19:08.345 "state": "online", 00:19:08.345 "raid_level": "raid1", 00:19:08.345 "superblock": false, 00:19:08.345 "num_base_bdevs": 2, 00:19:08.345 
"num_base_bdevs_discovered": 2, 00:19:08.345 "num_base_bdevs_operational": 2, 00:19:08.345 "base_bdevs_list": [ 00:19:08.345 { 00:19:08.345 "name": "BaseBdev1", 00:19:08.345 "uuid": "33405abc-1660-5bef-b0de-c15d3d3fcd8f", 00:19:08.345 "is_configured": true, 00:19:08.345 "data_offset": 0, 00:19:08.346 "data_size": 65536 00:19:08.346 }, 00:19:08.346 { 00:19:08.346 "name": "BaseBdev2", 00:19:08.346 "uuid": "25c8441f-cc72-52ef-a64d-8668ca474f1e", 00:19:08.346 "is_configured": true, 00:19:08.346 "data_offset": 0, 00:19:08.346 "data_size": 65536 00:19:08.346 } 00:19:08.346 ] 00:19:08.346 }' 00:19:08.346 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.346 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:08.914 [2024-10-15 09:18:52.620538] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:08.914 [2024-10-15 09:18:52.732177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.914 "name": "raid_bdev1", 00:19:08.914 "uuid": "cdeeb134-7cdf-446d-ac24-b6a3a55e78c7", 00:19:08.914 "strip_size_kb": 0, 00:19:08.914 "state": "online", 00:19:08.914 "raid_level": "raid1", 00:19:08.914 "superblock": false, 00:19:08.914 "num_base_bdevs": 2, 00:19:08.914 "num_base_bdevs_discovered": 1, 00:19:08.914 "num_base_bdevs_operational": 1, 00:19:08.914 "base_bdevs_list": [ 00:19:08.914 { 00:19:08.914 "name": null, 00:19:08.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.914 "is_configured": false, 00:19:08.914 "data_offset": 0, 00:19:08.914 "data_size": 65536 00:19:08.914 }, 00:19:08.914 { 00:19:08.914 "name": "BaseBdev2", 00:19:08.914 "uuid": "25c8441f-cc72-52ef-a64d-8668ca474f1e", 00:19:08.914 "is_configured": true, 00:19:08.914 "data_offset": 0, 00:19:08.914 "data_size": 65536 00:19:08.914 } 00:19:08.914 ] 00:19:08.914 }' 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.914 09:18:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:09.172 [2024-10-15 09:18:52.853495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:09.172 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:19:09.172 Zero copy mechanism will not be used. 00:19:09.172 Running I/O for 60 seconds... 00:19:09.430 09:18:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:09.430 09:18:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.430 09:18:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:09.430 [2024-10-15 09:18:53.268386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:09.430 09:18:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.430 09:18:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:09.430 [2024-10-15 09:18:53.338913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:09.430 [2024-10-15 09:18:53.341767] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:09.688 [2024-10-15 09:18:53.461408] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:09.688 [2024-10-15 09:18:53.462519] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:09.948 [2024-10-15 09:18:53.718729] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:09.948 [2024-10-15 09:18:53.719276] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:10.207 135.00 IOPS, 405.00 MiB/s [2024-10-15T09:18:54.135Z] [2024-10-15 09:18:54.092984] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:10.207 [2024-10-15 09:18:54.093863] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:10.466 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:10.466 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.466 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:10.466 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:10.466 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.466 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.466 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.466 09:18:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.466 09:18:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:10.466 09:18:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.466 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.466 "name": "raid_bdev1", 00:19:10.466 "uuid": "cdeeb134-7cdf-446d-ac24-b6a3a55e78c7", 00:19:10.466 "strip_size_kb": 0, 00:19:10.466 "state": "online", 00:19:10.466 "raid_level": "raid1", 00:19:10.466 "superblock": false, 00:19:10.466 "num_base_bdevs": 2, 00:19:10.466 "num_base_bdevs_discovered": 2, 00:19:10.466 "num_base_bdevs_operational": 2, 00:19:10.466 "process": { 00:19:10.466 "type": "rebuild", 00:19:10.466 "target": "spare", 00:19:10.466 "progress": { 00:19:10.466 "blocks": 12288, 00:19:10.466 "percent": 18 00:19:10.466 } 00:19:10.466 }, 00:19:10.466 "base_bdevs_list": [ 00:19:10.466 { 00:19:10.466 "name": "spare", 00:19:10.466 "uuid": "a0ec32fb-b710-52d6-bf81-9a097c608ac9", 00:19:10.466 
"is_configured": true, 00:19:10.466 "data_offset": 0, 00:19:10.466 "data_size": 65536 00:19:10.466 }, 00:19:10.466 { 00:19:10.466 "name": "BaseBdev2", 00:19:10.466 "uuid": "25c8441f-cc72-52ef-a64d-8668ca474f1e", 00:19:10.466 "is_configured": true, 00:19:10.466 "data_offset": 0, 00:19:10.466 "data_size": 65536 00:19:10.466 } 00:19:10.466 ] 00:19:10.466 }' 00:19:10.467 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.727 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:10.727 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.727 [2024-10-15 09:18:54.465435] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:10.727 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:10.727 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:10.727 09:18:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.727 09:18:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:10.727 [2024-10-15 09:18:54.483335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:10.727 [2024-10-15 09:18:54.600348] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:10.727 [2024-10-15 09:18:54.601194] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:10.986 [2024-10-15 09:18:54.710657] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:10.986 [2024-10-15 09:18:54.721420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:19:10.986 [2024-10-15 09:18:54.721698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:10.986 [2024-10-15 09:18:54.721728] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:10.986 [2024-10-15 09:18:54.775679] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:19:10.986 09:18:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.986 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:10.986 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:10.986 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:10.986 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:10.986 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:10.986 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:10.986 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.986 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.986 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.986 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.986 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.986 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.986 09:18:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.986 09:18:54 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:10.986 09:18:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.986 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.986 "name": "raid_bdev1", 00:19:10.986 "uuid": "cdeeb134-7cdf-446d-ac24-b6a3a55e78c7", 00:19:10.986 "strip_size_kb": 0, 00:19:10.986 "state": "online", 00:19:10.986 "raid_level": "raid1", 00:19:10.986 "superblock": false, 00:19:10.986 "num_base_bdevs": 2, 00:19:10.986 "num_base_bdevs_discovered": 1, 00:19:10.986 "num_base_bdevs_operational": 1, 00:19:10.986 "base_bdevs_list": [ 00:19:10.986 { 00:19:10.986 "name": null, 00:19:10.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.986 "is_configured": false, 00:19:10.986 "data_offset": 0, 00:19:10.986 "data_size": 65536 00:19:10.986 }, 00:19:10.986 { 00:19:10.986 "name": "BaseBdev2", 00:19:10.986 "uuid": "25c8441f-cc72-52ef-a64d-8668ca474f1e", 00:19:10.986 "is_configured": true, 00:19:10.986 "data_offset": 0, 00:19:10.986 "data_size": 65536 00:19:10.986 } 00:19:10.986 ] 00:19:10.986 }' 00:19:10.986 09:18:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.986 09:18:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:11.553 120.50 IOPS, 361.50 MiB/s [2024-10-15T09:18:55.481Z] 09:18:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:11.553 09:18:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.553 09:18:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:11.553 09:18:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:11.553 09:18:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.553 09:18:55 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.553 09:18:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.553 09:18:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.553 09:18:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:11.553 09:18:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.553 09:18:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.553 "name": "raid_bdev1", 00:19:11.553 "uuid": "cdeeb134-7cdf-446d-ac24-b6a3a55e78c7", 00:19:11.553 "strip_size_kb": 0, 00:19:11.553 "state": "online", 00:19:11.553 "raid_level": "raid1", 00:19:11.553 "superblock": false, 00:19:11.553 "num_base_bdevs": 2, 00:19:11.553 "num_base_bdevs_discovered": 1, 00:19:11.553 "num_base_bdevs_operational": 1, 00:19:11.553 "base_bdevs_list": [ 00:19:11.553 { 00:19:11.553 "name": null, 00:19:11.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.553 "is_configured": false, 00:19:11.553 "data_offset": 0, 00:19:11.553 "data_size": 65536 00:19:11.553 }, 00:19:11.553 { 00:19:11.553 "name": "BaseBdev2", 00:19:11.553 "uuid": "25c8441f-cc72-52ef-a64d-8668ca474f1e", 00:19:11.553 "is_configured": true, 00:19:11.553 "data_offset": 0, 00:19:11.554 "data_size": 65536 00:19:11.554 } 00:19:11.554 ] 00:19:11.554 }' 00:19:11.554 09:18:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.554 09:18:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:11.554 09:18:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.554 09:18:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:11.554 09:18:55 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:11.554 09:18:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.554 09:18:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:11.813 [2024-10-15 09:18:55.480939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:11.813 09:18:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.813 09:18:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:11.813 [2024-10-15 09:18:55.531497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:11.813 [2024-10-15 09:18:55.534174] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:11.813 [2024-10-15 09:18:55.670982] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:11.813 [2024-10-15 09:18:55.671839] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:12.071 [2024-10-15 09:18:55.808782] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:12.330 144.67 IOPS, 434.00 MiB/s [2024-10-15T09:18:56.258Z] [2024-10-15 09:18:56.152304] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:12.590 [2024-10-15 09:18:56.373797] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:12.590 [2024-10-15 09:18:56.374330] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:12.848 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:19:12.848 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.848 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:12.848 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:12.848 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.848 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.848 09:18:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.848 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.848 09:18:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:12.848 09:18:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.848 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.848 "name": "raid_bdev1", 00:19:12.848 "uuid": "cdeeb134-7cdf-446d-ac24-b6a3a55e78c7", 00:19:12.848 "strip_size_kb": 0, 00:19:12.848 "state": "online", 00:19:12.848 "raid_level": "raid1", 00:19:12.848 "superblock": false, 00:19:12.848 "num_base_bdevs": 2, 00:19:12.848 "num_base_bdevs_discovered": 2, 00:19:12.848 "num_base_bdevs_operational": 2, 00:19:12.848 "process": { 00:19:12.848 "type": "rebuild", 00:19:12.848 "target": "spare", 00:19:12.848 "progress": { 00:19:12.848 "blocks": 10240, 00:19:12.848 "percent": 15 00:19:12.848 } 00:19:12.848 }, 00:19:12.848 "base_bdevs_list": [ 00:19:12.848 { 00:19:12.848 "name": "spare", 00:19:12.848 "uuid": "a0ec32fb-b710-52d6-bf81-9a097c608ac9", 00:19:12.848 "is_configured": true, 00:19:12.849 "data_offset": 0, 00:19:12.849 "data_size": 65536 00:19:12.849 }, 00:19:12.849 { 00:19:12.849 "name": "BaseBdev2", 00:19:12.849 "uuid": 
"25c8441f-cc72-52ef-a64d-8668ca474f1e", 00:19:12.849 "is_configured": true, 00:19:12.849 "data_offset": 0, 00:19:12.849 "data_size": 65536 00:19:12.849 } 00:19:12.849 ] 00:19:12.849 }' 00:19:12.849 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.849 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:12.849 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.849 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:12.849 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:12.849 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:12.849 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:12.849 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:12.849 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=447 00:19:12.849 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:12.849 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.849 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.849 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:12.849 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:12.849 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.849 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.849 09:18:56 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.849 09:18:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:12.849 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.849 09:18:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.849 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.849 "name": "raid_bdev1", 00:19:12.849 "uuid": "cdeeb134-7cdf-446d-ac24-b6a3a55e78c7", 00:19:12.849 "strip_size_kb": 0, 00:19:12.849 "state": "online", 00:19:12.849 "raid_level": "raid1", 00:19:12.849 "superblock": false, 00:19:12.849 "num_base_bdevs": 2, 00:19:12.849 "num_base_bdevs_discovered": 2, 00:19:12.849 "num_base_bdevs_operational": 2, 00:19:12.849 "process": { 00:19:12.849 "type": "rebuild", 00:19:12.849 "target": "spare", 00:19:12.849 "progress": { 00:19:12.849 "blocks": 12288, 00:19:12.849 "percent": 18 00:19:12.849 } 00:19:12.849 }, 00:19:12.849 "base_bdevs_list": [ 00:19:12.849 { 00:19:12.849 "name": "spare", 00:19:12.849 "uuid": "a0ec32fb-b710-52d6-bf81-9a097c608ac9", 00:19:12.849 "is_configured": true, 00:19:12.849 "data_offset": 0, 00:19:12.849 "data_size": 65536 00:19:12.849 }, 00:19:12.849 { 00:19:12.849 "name": "BaseBdev2", 00:19:12.849 "uuid": "25c8441f-cc72-52ef-a64d-8668ca474f1e", 00:19:12.849 "is_configured": true, 00:19:12.849 "data_offset": 0, 00:19:12.849 "data_size": 65536 00:19:12.849 } 00:19:12.849 ] 00:19:12.849 }' 00:19:12.849 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.107 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.107 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.107 [2024-10-15 09:18:56.818665] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:13.107 [2024-10-15 09:18:56.819138] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:13.107 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.107 09:18:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:13.366 127.75 IOPS, 383.25 MiB/s [2024-10-15T09:18:57.294Z] [2024-10-15 09:18:57.144956] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:13.366 [2024-10-15 09:18:57.275304] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:13.933 [2024-10-15 09:18:57.759990] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:19:13.933 09:18:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:13.933 09:18:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.933 09:18:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.933 09:18:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.933 09:18:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.933 09:18:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.191 09:18:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.191 09:18:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.191 09:18:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:19:14.191 09:18:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.191 114.20 IOPS, 342.60 MiB/s [2024-10-15T09:18:58.119Z] 09:18:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.191 09:18:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.191 "name": "raid_bdev1", 00:19:14.191 "uuid": "cdeeb134-7cdf-446d-ac24-b6a3a55e78c7", 00:19:14.191 "strip_size_kb": 0, 00:19:14.191 "state": "online", 00:19:14.191 "raid_level": "raid1", 00:19:14.191 "superblock": false, 00:19:14.191 "num_base_bdevs": 2, 00:19:14.191 "num_base_bdevs_discovered": 2, 00:19:14.191 "num_base_bdevs_operational": 2, 00:19:14.191 "process": { 00:19:14.191 "type": "rebuild", 00:19:14.191 "target": "spare", 00:19:14.191 "progress": { 00:19:14.191 "blocks": 28672, 00:19:14.191 "percent": 43 00:19:14.191 } 00:19:14.191 }, 00:19:14.191 "base_bdevs_list": [ 00:19:14.191 { 00:19:14.191 "name": "spare", 00:19:14.191 "uuid": "a0ec32fb-b710-52d6-bf81-9a097c608ac9", 00:19:14.191 "is_configured": true, 00:19:14.191 "data_offset": 0, 00:19:14.191 "data_size": 65536 00:19:14.191 }, 00:19:14.191 { 00:19:14.191 "name": "BaseBdev2", 00:19:14.191 "uuid": "25c8441f-cc72-52ef-a64d-8668ca474f1e", 00:19:14.191 "is_configured": true, 00:19:14.191 "data_offset": 0, 00:19:14.191 "data_size": 65536 00:19:14.191 } 00:19:14.191 ] 00:19:14.191 }' 00:19:14.191 09:18:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.191 09:18:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:14.191 09:18:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:14.191 [2024-10-15 09:18:57.988138] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:14.191 09:18:58 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:14.191 09:18:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:14.191 [2024-10-15 09:18:58.117619] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:15.125 [2024-10-15 09:18:58.783458] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:19:15.125 104.83 IOPS, 314.50 MiB/s [2024-10-15T09:18:59.054Z] [2024-10-15 09:18:59.003031] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:19:15.126 09:18:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:15.126 09:18:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.126 09:18:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.126 09:18:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.126 09:18:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.126 09:18:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.126 09:18:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.126 09:18:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.126 09:18:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.126 09:18:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:15.385 09:18:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.385 09:18:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.385 "name": "raid_bdev1", 00:19:15.385 "uuid": "cdeeb134-7cdf-446d-ac24-b6a3a55e78c7", 00:19:15.385 "strip_size_kb": 0, 00:19:15.385 "state": "online", 00:19:15.385 "raid_level": "raid1", 00:19:15.385 "superblock": false, 00:19:15.385 "num_base_bdevs": 2, 00:19:15.385 "num_base_bdevs_discovered": 2, 00:19:15.385 "num_base_bdevs_operational": 2, 00:19:15.385 "process": { 00:19:15.385 "type": "rebuild", 00:19:15.385 "target": "spare", 00:19:15.385 "progress": { 00:19:15.385 "blocks": 47104, 00:19:15.385 "percent": 71 00:19:15.385 } 00:19:15.385 }, 00:19:15.385 "base_bdevs_list": [ 00:19:15.385 { 00:19:15.385 "name": "spare", 00:19:15.385 "uuid": "a0ec32fb-b710-52d6-bf81-9a097c608ac9", 00:19:15.385 "is_configured": true, 00:19:15.385 "data_offset": 0, 00:19:15.385 "data_size": 65536 00:19:15.385 }, 00:19:15.385 { 00:19:15.385 "name": "BaseBdev2", 00:19:15.385 "uuid": "25c8441f-cc72-52ef-a64d-8668ca474f1e", 00:19:15.385 "is_configured": true, 00:19:15.385 "data_offset": 0, 00:19:15.385 "data_size": 65536 00:19:15.385 } 00:19:15.385 ] 00:19:15.385 }' 00:19:15.385 09:18:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.385 09:18:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.385 09:18:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.385 09:18:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.385 09:18:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:15.643 [2024-10-15 09:18:59.337527] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:19:16.211 94.86 IOPS, 284.57 MiB/s [2024-10-15T09:19:00.139Z] [2024-10-15 09:19:00.132351] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on 
raid_bdev1 00:19:16.470 09:19:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:16.470 09:19:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:16.470 09:19:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.470 09:19:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:16.470 09:19:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:16.470 09:19:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.470 09:19:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.470 09:19:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.470 09:19:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:16.470 09:19:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.470 09:19:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.470 [2024-10-15 09:19:00.240284] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:16.470 [2024-10-15 09:19:00.243664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.470 09:19:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.470 "name": "raid_bdev1", 00:19:16.470 "uuid": "cdeeb134-7cdf-446d-ac24-b6a3a55e78c7", 00:19:16.470 "strip_size_kb": 0, 00:19:16.470 "state": "online", 00:19:16.470 "raid_level": "raid1", 00:19:16.470 "superblock": false, 00:19:16.470 "num_base_bdevs": 2, 00:19:16.470 "num_base_bdevs_discovered": 2, 00:19:16.470 "num_base_bdevs_operational": 2, 00:19:16.470 "process": { 00:19:16.470 "type": "rebuild", 
00:19:16.470 "target": "spare", 00:19:16.470 "progress": { 00:19:16.470 "blocks": 65536, 00:19:16.470 "percent": 100 00:19:16.470 } 00:19:16.470 }, 00:19:16.470 "base_bdevs_list": [ 00:19:16.470 { 00:19:16.470 "name": "spare", 00:19:16.470 "uuid": "a0ec32fb-b710-52d6-bf81-9a097c608ac9", 00:19:16.470 "is_configured": true, 00:19:16.470 "data_offset": 0, 00:19:16.470 "data_size": 65536 00:19:16.470 }, 00:19:16.470 { 00:19:16.470 "name": "BaseBdev2", 00:19:16.470 "uuid": "25c8441f-cc72-52ef-a64d-8668ca474f1e", 00:19:16.470 "is_configured": true, 00:19:16.470 "data_offset": 0, 00:19:16.470 "data_size": 65536 00:19:16.470 } 00:19:16.470 ] 00:19:16.470 }' 00:19:16.470 09:19:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.470 09:19:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:16.470 09:19:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.470 09:19:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:16.470 09:19:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:17.605 87.25 IOPS, 261.75 MiB/s [2024-10-15T09:19:01.533Z] 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.605 "name": "raid_bdev1", 00:19:17.605 "uuid": "cdeeb134-7cdf-446d-ac24-b6a3a55e78c7", 00:19:17.605 "strip_size_kb": 0, 00:19:17.605 "state": "online", 00:19:17.605 "raid_level": "raid1", 00:19:17.605 "superblock": false, 00:19:17.605 "num_base_bdevs": 2, 00:19:17.605 "num_base_bdevs_discovered": 2, 00:19:17.605 "num_base_bdevs_operational": 2, 00:19:17.605 "base_bdevs_list": [ 00:19:17.605 { 00:19:17.605 "name": "spare", 00:19:17.605 "uuid": "a0ec32fb-b710-52d6-bf81-9a097c608ac9", 00:19:17.605 "is_configured": true, 00:19:17.605 "data_offset": 0, 00:19:17.605 "data_size": 65536 00:19:17.605 }, 00:19:17.605 { 00:19:17.605 "name": "BaseBdev2", 00:19:17.605 "uuid": "25c8441f-cc72-52ef-a64d-8668ca474f1e", 00:19:17.605 "is_configured": true, 00:19:17.605 "data_offset": 0, 00:19:17.605 "data_size": 65536 00:19:17.605 } 00:19:17.605 ] 00:19:17.605 }' 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # 
break 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.605 09:19:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.865 "name": "raid_bdev1", 00:19:17.865 "uuid": "cdeeb134-7cdf-446d-ac24-b6a3a55e78c7", 00:19:17.865 "strip_size_kb": 0, 00:19:17.865 "state": "online", 00:19:17.865 "raid_level": "raid1", 00:19:17.865 "superblock": false, 00:19:17.865 "num_base_bdevs": 2, 00:19:17.865 "num_base_bdevs_discovered": 2, 00:19:17.865 "num_base_bdevs_operational": 2, 00:19:17.865 "base_bdevs_list": [ 00:19:17.865 { 00:19:17.865 "name": "spare", 00:19:17.865 "uuid": "a0ec32fb-b710-52d6-bf81-9a097c608ac9", 00:19:17.865 "is_configured": true, 00:19:17.865 "data_offset": 0, 00:19:17.865 "data_size": 65536 00:19:17.865 }, 00:19:17.865 { 00:19:17.865 "name": "BaseBdev2", 00:19:17.865 "uuid": "25c8441f-cc72-52ef-a64d-8668ca474f1e", 00:19:17.865 "is_configured": true, 00:19:17.865 "data_offset": 0, 
00:19:17.865 "data_size": 65536 00:19:17.865 } 00:19:17.865 ] 00:19:17.865 }' 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.865 09:19:01 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.865 "name": "raid_bdev1", 00:19:17.865 "uuid": "cdeeb134-7cdf-446d-ac24-b6a3a55e78c7", 00:19:17.865 "strip_size_kb": 0, 00:19:17.865 "state": "online", 00:19:17.865 "raid_level": "raid1", 00:19:17.865 "superblock": false, 00:19:17.865 "num_base_bdevs": 2, 00:19:17.865 "num_base_bdevs_discovered": 2, 00:19:17.865 "num_base_bdevs_operational": 2, 00:19:17.865 "base_bdevs_list": [ 00:19:17.865 { 00:19:17.865 "name": "spare", 00:19:17.865 "uuid": "a0ec32fb-b710-52d6-bf81-9a097c608ac9", 00:19:17.865 "is_configured": true, 00:19:17.865 "data_offset": 0, 00:19:17.865 "data_size": 65536 00:19:17.865 }, 00:19:17.865 { 00:19:17.865 "name": "BaseBdev2", 00:19:17.865 "uuid": "25c8441f-cc72-52ef-a64d-8668ca474f1e", 00:19:17.865 "is_configured": true, 00:19:17.865 "data_offset": 0, 00:19:17.865 "data_size": 65536 00:19:17.865 } 00:19:17.865 ] 00:19:17.865 }' 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.865 09:19:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:18.383 83.33 IOPS, 250.00 MiB/s [2024-10-15T09:19:02.311Z] 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:18.383 09:19:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.383 09:19:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:18.383 [2024-10-15 09:19:02.200110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:18.383 [2024-10-15 09:19:02.200308] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:18.383 00:19:18.383 
Latency(us) 00:19:18.383 [2024-10-15T09:19:02.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.383 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:18.383 raid_bdev1 : 9.42 81.25 243.75 0.00 0.00 16988.95 286.72 119632.99 00:19:18.383 [2024-10-15T09:19:02.311Z] =================================================================================================================== 00:19:18.383 [2024-10-15T09:19:02.311Z] Total : 81.25 243.75 0.00 0.00 16988.95 286.72 119632.99 00:19:18.383 [2024-10-15 09:19:02.293019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.383 [2024-10-15 09:19:02.293404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:18.383 { 00:19:18.383 "results": [ 00:19:18.383 { 00:19:18.383 "job": "raid_bdev1", 00:19:18.383 "core_mask": "0x1", 00:19:18.383 "workload": "randrw", 00:19:18.383 "percentage": 50, 00:19:18.383 "status": "finished", 00:19:18.383 "queue_depth": 2, 00:19:18.383 "io_size": 3145728, 00:19:18.383 "runtime": 9.415348, 00:19:18.383 "iops": 81.25031597345101, 00:19:18.383 "mibps": 243.75094792035304, 00:19:18.383 "io_failed": 0, 00:19:18.383 "io_timeout": 0, 00:19:18.383 "avg_latency_us": 16988.948534759358, 00:19:18.383 "min_latency_us": 286.72, 00:19:18.383 "max_latency_us": 119632.98909090909 00:19:18.383 } 00:19:18.383 ], 00:19:18.383 "core_count": 1 00:19:18.383 } 00:19:18.383 [2024-10-15 09:19:02.293583] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:18.383 [2024-10-15 09:19:02.293609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:18.383 09:19:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.383 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 
00:19:18.383 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:19:18.383 09:19:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.383 09:19:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:18.660 09:19:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.660 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:18.660 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:18.660 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:19:18.660 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:19:18.660 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:18.660 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:18.660 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:18.660 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:18.660 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:18.660 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:18.660 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:18.660 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:18.660 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:19:18.919 /dev/nbd0 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:18.919 1+0 records in 00:19:18.919 1+0 records out 00:19:18.919 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040741 s, 10.1 MB/s 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 
)) 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:18.919 09:19:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:19:19.178 /dev/nbd1 00:19:19.178 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:19.178 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:19.178 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:19.178 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:19:19.178 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:19.178 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 
00:19:19.178 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:19.178 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:19:19.178 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:19.178 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:19.178 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:19.178 1+0 records in 00:19:19.178 1+0 records out 00:19:19.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520657 s, 7.9 MB/s 00:19:19.178 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:19.178 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:19:19.178 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:19.178 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:19.178 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:19:19.178 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:19.178 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:19.178 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:19.436 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:19.436 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:19.436 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd1') 00:19:19.436 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:19.436 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:19.436 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:19.436 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:19.695 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:19.695 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:19.695 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:19.695 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:19.695 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:19.695 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:19.695 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:19.695 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:19.695 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:19.695 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:19.695 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:19.695 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:19.695 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:19.695 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:19.695 09:19:03 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:19.954 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:19.954 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:19.954 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:19.954 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:19.954 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:19.954 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:19.954 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:19.954 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:19.954 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:19.954 09:19:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76945 00:19:19.954 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 76945 ']' 00:19:19.954 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 76945 00:19:19.954 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:19:19.954 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:19.954 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76945 00:19:19.954 killing process with pid 76945 00:19:19.954 Received shutdown signal, test time was about 10.963780 seconds 00:19:19.954 00:19:19.954 Latency(us) 00:19:19.954 [2024-10-15T09:19:03.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.954 [2024-10-15T09:19:03.882Z] 
=================================================================================================================== 00:19:19.954 [2024-10-15T09:19:03.882Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:19.954 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:19.954 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:19.954 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76945' 00:19:19.954 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 76945 00:19:19.954 [2024-10-15 09:19:03.820476] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:19.954 09:19:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 76945 00:19:20.213 [2024-10-15 09:19:04.044851] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:19:21.590 00:19:21.590 real 0m14.519s 00:19:21.590 user 0m18.666s 00:19:21.590 sys 0m1.568s 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:21.590 ************************************ 00:19:21.590 END TEST raid_rebuild_test_io 00:19:21.590 ************************************ 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:21.590 09:19:05 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:19:21.590 09:19:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:21.590 09:19:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:21.590 09:19:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:21.590 ************************************ 00:19:21.590 START TEST raid_rebuild_test_sb_io 
00:19:21.590 ************************************ 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local 
strip_size 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77357 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77357 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 77357 ']' 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:21.590 09:19:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:21.590 [2024-10-15 09:19:05.403249] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:19:21.590 [2024-10-15 09:19:05.403733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77357 ] 00:19:21.590 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:21.590 Zero copy mechanism will not be used. 00:19:21.849 [2024-10-15 09:19:05.582634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.849 [2024-10-15 09:19:05.730533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.118 [2024-10-15 09:19:05.954929] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:22.118 [2024-10-15 09:19:05.955271] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:22.712 BaseBdev1_malloc 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:22.712 [2024-10-15 09:19:06.460499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:22.712 [2024-10-15 09:19:06.460719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.712 [2024-10-15 09:19:06.460801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:22.712 [2024-10-15 09:19:06.460982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.712 [2024-10-15 09:19:06.464060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.712 [2024-10-15 09:19:06.464246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:22.712 BaseBdev1 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:22.712 BaseBdev2_malloc 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:22.712 [2024-10-15 09:19:06.516088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:22.712 [2024-10-15 09:19:06.516310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.712 [2024-10-15 09:19:06.516387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:22.712 [2024-10-15 09:19:06.516655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.712 [2024-10-15 09:19:06.519570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.712 [2024-10-15 09:19:06.519729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:22.712 BaseBdev2 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:22.712 spare_malloc 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:22.712 spare_delay 
00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:22.712 [2024-10-15 09:19:06.594037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:22.712 [2024-10-15 09:19:06.594283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.712 [2024-10-15 09:19:06.594361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:22.712 [2024-10-15 09:19:06.594489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.712 [2024-10-15 09:19:06.597500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.712 [2024-10-15 09:19:06.597664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:22.712 spare 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:22.712 [2024-10-15 09:19:06.602108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:22.712 [2024-10-15 09:19:06.604667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:22.712 [2024-10-15 09:19:06.604897] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:22.712 [2024-10-15 09:19:06.604923] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:22.712 [2024-10-15 09:19:06.605304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:22.712 [2024-10-15 09:19:06.605542] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:22.712 [2024-10-15 09:19:06.605561] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:22.712 [2024-10-15 09:19:06.605746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.712 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.713 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.713 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:22.713 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.713 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.713 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.713 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.713 09:19:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.713 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.713 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.713 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:22.713 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.971 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.971 "name": "raid_bdev1", 00:19:22.971 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:22.971 "strip_size_kb": 0, 00:19:22.971 "state": "online", 00:19:22.971 "raid_level": "raid1", 00:19:22.971 "superblock": true, 00:19:22.971 "num_base_bdevs": 2, 00:19:22.971 "num_base_bdevs_discovered": 2, 00:19:22.971 "num_base_bdevs_operational": 2, 00:19:22.971 "base_bdevs_list": [ 00:19:22.971 { 00:19:22.971 "name": "BaseBdev1", 00:19:22.971 "uuid": "c0b4cdf0-9e45-5274-8dfd-84fd75b8dd29", 00:19:22.971 "is_configured": true, 00:19:22.971 "data_offset": 2048, 00:19:22.971 "data_size": 63488 00:19:22.971 }, 00:19:22.971 { 00:19:22.971 "name": "BaseBdev2", 00:19:22.971 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:22.971 "is_configured": true, 00:19:22.971 "data_offset": 2048, 00:19:22.971 "data_size": 63488 00:19:22.971 } 00:19:22.971 ] 00:19:22.971 }' 00:19:22.971 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.971 09:19:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:23.229 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:23.229 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:23.229 09:19:07 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.229 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:23.229 [2024-10-15 09:19:07.138660] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:23.229 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:23.489 [2024-10-15 09:19:07.238308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.489 "name": "raid_bdev1", 00:19:23.489 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:23.489 "strip_size_kb": 0, 00:19:23.489 "state": "online", 00:19:23.489 
"raid_level": "raid1", 00:19:23.489 "superblock": true, 00:19:23.489 "num_base_bdevs": 2, 00:19:23.489 "num_base_bdevs_discovered": 1, 00:19:23.489 "num_base_bdevs_operational": 1, 00:19:23.489 "base_bdevs_list": [ 00:19:23.489 { 00:19:23.489 "name": null, 00:19:23.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.489 "is_configured": false, 00:19:23.489 "data_offset": 0, 00:19:23.489 "data_size": 63488 00:19:23.489 }, 00:19:23.489 { 00:19:23.489 "name": "BaseBdev2", 00:19:23.489 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:23.489 "is_configured": true, 00:19:23.489 "data_offset": 2048, 00:19:23.489 "data_size": 63488 00:19:23.489 } 00:19:23.489 ] 00:19:23.489 }' 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.489 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:23.489 [2024-10-15 09:19:07.363175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:23.489 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:23.489 Zero copy mechanism will not be used. 00:19:23.489 Running I/O for 60 seconds... 
00:19:24.056 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:24.056 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.056 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:24.056 [2024-10-15 09:19:07.772297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:24.056 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.056 09:19:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:24.056 [2024-10-15 09:19:07.845467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:24.056 [2024-10-15 09:19:07.848422] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:24.056 [2024-10-15 09:19:07.966686] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:24.056 [2024-10-15 09:19:07.967615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:24.314 [2024-10-15 09:19:08.181093] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:24.314 [2024-10-15 09:19:08.181620] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:24.574 160.00 IOPS, 480.00 MiB/s [2024-10-15T09:19:08.502Z] [2024-10-15 09:19:08.434879] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:24.831 [2024-10-15 09:19:08.682341] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:24.831 [2024-10-15 09:19:08.682852] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:25.090 09:19:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:25.090 09:19:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.090 09:19:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:25.090 09:19:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:25.090 09:19:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.090 09:19:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.090 09:19:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.090 09:19:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.090 09:19:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:25.090 09:19:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.090 09:19:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:25.090 "name": "raid_bdev1", 00:19:25.090 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:25.090 "strip_size_kb": 0, 00:19:25.090 "state": "online", 00:19:25.090 "raid_level": "raid1", 00:19:25.090 "superblock": true, 00:19:25.090 "num_base_bdevs": 2, 00:19:25.090 "num_base_bdevs_discovered": 2, 00:19:25.090 "num_base_bdevs_operational": 2, 00:19:25.090 "process": { 00:19:25.090 "type": "rebuild", 00:19:25.090 "target": "spare", 00:19:25.090 "progress": { 00:19:25.090 "blocks": 10240, 00:19:25.090 "percent": 16 00:19:25.090 } 00:19:25.090 }, 00:19:25.090 "base_bdevs_list": [ 00:19:25.090 { 00:19:25.090 "name": "spare", 
00:19:25.090 "uuid": "be580054-ee42-5bb5-aa04-a031a64d8f6b", 00:19:25.090 "is_configured": true, 00:19:25.090 "data_offset": 2048, 00:19:25.090 "data_size": 63488 00:19:25.090 }, 00:19:25.090 { 00:19:25.090 "name": "BaseBdev2", 00:19:25.090 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:25.090 "is_configured": true, 00:19:25.090 "data_offset": 2048, 00:19:25.090 "data_size": 63488 00:19:25.090 } 00:19:25.090 ] 00:19:25.090 }' 00:19:25.090 09:19:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.090 09:19:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:25.090 09:19:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.090 09:19:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:25.090 09:19:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:25.090 09:19:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.090 09:19:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:25.090 [2024-10-15 09:19:08.980003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:25.348 [2024-10-15 09:19:09.076641] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:25.348 [2024-10-15 09:19:09.186703] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:25.348 [2024-10-15 09:19:09.190939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.348 [2024-10-15 09:19:09.190992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:25.348 [2024-10-15 09:19:09.191014] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to 
remove target bdev: No such device 00:19:25.348 [2024-10-15 09:19:09.243111] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:19:25.348 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.348 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:25.348 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.348 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.348 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.348 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.348 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:25.348 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.348 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.348 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.348 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.348 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.348 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.348 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.348 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:25.607 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.607 09:19:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.607 "name": "raid_bdev1", 00:19:25.607 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:25.607 "strip_size_kb": 0, 00:19:25.607 "state": "online", 00:19:25.607 "raid_level": "raid1", 00:19:25.607 "superblock": true, 00:19:25.607 "num_base_bdevs": 2, 00:19:25.607 "num_base_bdevs_discovered": 1, 00:19:25.607 "num_base_bdevs_operational": 1, 00:19:25.607 "base_bdevs_list": [ 00:19:25.607 { 00:19:25.607 "name": null, 00:19:25.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.607 "is_configured": false, 00:19:25.607 "data_offset": 0, 00:19:25.607 "data_size": 63488 00:19:25.607 }, 00:19:25.607 { 00:19:25.607 "name": "BaseBdev2", 00:19:25.607 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:25.607 "is_configured": true, 00:19:25.607 "data_offset": 2048, 00:19:25.607 "data_size": 63488 00:19:25.607 } 00:19:25.607 ] 00:19:25.607 }' 00:19:25.607 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.607 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:25.864 128.50 IOPS, 385.50 MiB/s [2024-10-15T09:19:09.792Z] 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:25.864 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.864 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:25.864 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:25.864 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.864 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.864 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:25.864 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.864 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:26.122 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.122 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.122 "name": "raid_bdev1", 00:19:26.122 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:26.122 "strip_size_kb": 0, 00:19:26.122 "state": "online", 00:19:26.122 "raid_level": "raid1", 00:19:26.122 "superblock": true, 00:19:26.122 "num_base_bdevs": 2, 00:19:26.122 "num_base_bdevs_discovered": 1, 00:19:26.122 "num_base_bdevs_operational": 1, 00:19:26.122 "base_bdevs_list": [ 00:19:26.122 { 00:19:26.122 "name": null, 00:19:26.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.122 "is_configured": false, 00:19:26.122 "data_offset": 0, 00:19:26.122 "data_size": 63488 00:19:26.122 }, 00:19:26.122 { 00:19:26.122 "name": "BaseBdev2", 00:19:26.122 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:26.122 "is_configured": true, 00:19:26.122 "data_offset": 2048, 00:19:26.122 "data_size": 63488 00:19:26.122 } 00:19:26.122 ] 00:19:26.122 }' 00:19:26.122 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.122 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:26.122 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.122 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:26.122 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:26.122 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:26.122 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:26.122 [2024-10-15 09:19:09.928299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:26.122 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.122 09:19:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:26.122 [2024-10-15 09:19:09.992209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:26.122 [2024-10-15 09:19:09.995253] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:26.380 [2024-10-15 09:19:10.106411] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:26.380 [2024-10-15 09:19:10.107411] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:26.638 [2024-10-15 09:19:10.320052] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:26.638 [2024-10-15 09:19:10.320560] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:26.896 133.67 IOPS, 401.00 MiB/s [2024-10-15T09:19:10.824Z] [2024-10-15 09:19:10.640480] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:27.155 [2024-10-15 09:19:10.868126] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:27.155 [2024-10-15 09:19:10.868862] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:27.155 09:19:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:19:27.155 09:19:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.155 09:19:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:27.155 09:19:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:27.155 09:19:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.155 09:19:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.155 09:19:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.155 09:19:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.155 09:19:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:27.155 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.155 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.155 "name": "raid_bdev1", 00:19:27.155 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:27.155 "strip_size_kb": 0, 00:19:27.155 "state": "online", 00:19:27.155 "raid_level": "raid1", 00:19:27.155 "superblock": true, 00:19:27.155 "num_base_bdevs": 2, 00:19:27.155 "num_base_bdevs_discovered": 2, 00:19:27.155 "num_base_bdevs_operational": 2, 00:19:27.155 "process": { 00:19:27.155 "type": "rebuild", 00:19:27.155 "target": "spare", 00:19:27.155 "progress": { 00:19:27.155 "blocks": 10240, 00:19:27.155 "percent": 16 00:19:27.155 } 00:19:27.155 }, 00:19:27.155 "base_bdevs_list": [ 00:19:27.155 { 00:19:27.155 "name": "spare", 00:19:27.155 "uuid": "be580054-ee42-5bb5-aa04-a031a64d8f6b", 00:19:27.155 "is_configured": true, 00:19:27.155 "data_offset": 2048, 00:19:27.155 "data_size": 63488 00:19:27.155 }, 00:19:27.155 { 00:19:27.155 "name": "BaseBdev2", 
00:19:27.155 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:27.155 "is_configured": true, 00:19:27.155 "data_offset": 2048, 00:19:27.155 "data_size": 63488 00:19:27.155 } 00:19:27.155 ] 00:19:27.155 }' 00:19:27.155 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:27.414 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=462 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.414 "name": "raid_bdev1", 00:19:27.414 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:27.414 "strip_size_kb": 0, 00:19:27.414 "state": "online", 00:19:27.414 "raid_level": "raid1", 00:19:27.414 "superblock": true, 00:19:27.414 "num_base_bdevs": 2, 00:19:27.414 "num_base_bdevs_discovered": 2, 00:19:27.414 "num_base_bdevs_operational": 2, 00:19:27.414 "process": { 00:19:27.414 "type": "rebuild", 00:19:27.414 "target": "spare", 00:19:27.414 "progress": { 00:19:27.414 "blocks": 12288, 00:19:27.414 "percent": 19 00:19:27.414 } 00:19:27.414 }, 00:19:27.414 "base_bdevs_list": [ 00:19:27.414 { 00:19:27.414 "name": "spare", 00:19:27.414 "uuid": "be580054-ee42-5bb5-aa04-a031a64d8f6b", 00:19:27.414 "is_configured": true, 00:19:27.414 "data_offset": 2048, 00:19:27.414 "data_size": 63488 00:19:27.414 }, 00:19:27.414 { 00:19:27.414 "name": "BaseBdev2", 00:19:27.414 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:27.414 "is_configured": true, 00:19:27.414 "data_offset": 2048, 00:19:27.414 "data_size": 63488 00:19:27.414 } 00:19:27.414 ] 00:19:27.414 }' 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.414 
[2024-10-15 09:19:11.227431] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:27.414 [2024-10-15 09:19:11.228200] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:27.414 09:19:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:27.673 127.25 IOPS, 381.75 MiB/s [2024-10-15T09:19:11.601Z] [2024-10-15 09:19:11.449188] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:27.673 [2024-10-15 09:19:11.449496] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:27.932 [2024-10-15 09:19:11.736294] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:28.191 [2024-10-15 09:19:11.995213] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:28.450 09:19:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:28.450 09:19:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:28.450 09:19:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:28.450 09:19:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:28.450 09:19:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:28.450 09:19:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:28.450 09:19:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.450 09:19:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.450 09:19:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.450 09:19:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:28.450 09:19:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.450 [2024-10-15 09:19:12.324950] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:28.450 09:19:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:28.450 "name": "raid_bdev1", 00:19:28.450 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:28.450 "strip_size_kb": 0, 00:19:28.450 "state": "online", 00:19:28.450 "raid_level": "raid1", 00:19:28.450 "superblock": true, 00:19:28.450 "num_base_bdevs": 2, 00:19:28.450 "num_base_bdevs_discovered": 2, 00:19:28.450 "num_base_bdevs_operational": 2, 00:19:28.450 "process": { 00:19:28.450 "type": "rebuild", 00:19:28.450 "target": "spare", 00:19:28.450 "progress": { 00:19:28.450 "blocks": 24576, 00:19:28.450 "percent": 38 00:19:28.450 } 00:19:28.450 }, 00:19:28.450 "base_bdevs_list": [ 00:19:28.450 { 00:19:28.450 "name": "spare", 00:19:28.450 "uuid": "be580054-ee42-5bb5-aa04-a031a64d8f6b", 00:19:28.450 "is_configured": true, 00:19:28.450 "data_offset": 2048, 00:19:28.450 "data_size": 63488 00:19:28.450 }, 00:19:28.450 { 00:19:28.450 "name": "BaseBdev2", 00:19:28.450 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:28.450 "is_configured": true, 00:19:28.450 
"data_offset": 2048, 00:19:28.450 "data_size": 63488 00:19:28.450 } 00:19:28.450 ] 00:19:28.450 }' 00:19:28.450 09:19:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:28.708 112.40 IOPS, 337.20 MiB/s [2024-10-15T09:19:12.636Z] 09:19:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:28.708 09:19:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:28.708 09:19:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:28.708 09:19:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:28.708 [2024-10-15 09:19:12.546871] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:19:28.967 [2024-10-15 09:19:12.786327] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:29.226 [2024-10-15 09:19:13.015696] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:29.484 [2024-10-15 09:19:13.332758] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:19:29.743 102.33 IOPS, 307.00 MiB/s [2024-10-15T09:19:13.671Z] [2024-10-15 09:19:13.441010] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:19:29.743 09:19:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:29.743 09:19:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:29.744 09:19:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.744 09:19:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:29.744 09:19:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:29.744 09:19:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.744 09:19:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.744 09:19:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.744 09:19:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.744 09:19:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:29.744 09:19:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.744 09:19:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.744 "name": "raid_bdev1", 00:19:29.744 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:29.744 "strip_size_kb": 0, 00:19:29.744 "state": "online", 00:19:29.744 "raid_level": "raid1", 00:19:29.744 "superblock": true, 00:19:29.744 "num_base_bdevs": 2, 00:19:29.744 "num_base_bdevs_discovered": 2, 00:19:29.744 "num_base_bdevs_operational": 2, 00:19:29.744 "process": { 00:19:29.744 "type": "rebuild", 00:19:29.744 "target": "spare", 00:19:29.744 "progress": { 00:19:29.744 "blocks": 40960, 00:19:29.744 "percent": 64 00:19:29.744 } 00:19:29.744 }, 00:19:29.744 "base_bdevs_list": [ 00:19:29.744 { 00:19:29.744 "name": "spare", 00:19:29.744 "uuid": "be580054-ee42-5bb5-aa04-a031a64d8f6b", 00:19:29.744 "is_configured": true, 00:19:29.744 "data_offset": 2048, 00:19:29.744 "data_size": 63488 00:19:29.744 }, 00:19:29.744 { 00:19:29.744 "name": "BaseBdev2", 00:19:29.744 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:29.744 "is_configured": true, 00:19:29.744 "data_offset": 2048, 00:19:29.744 "data_size": 
63488 00:19:29.744 } 00:19:29.744 ] 00:19:29.744 }' 00:19:29.744 09:19:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.744 09:19:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:29.744 09:19:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.744 09:19:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:29.744 09:19:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:30.003 [2024-10-15 09:19:13.883574] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:19:30.831 92.14 IOPS, 276.43 MiB/s [2024-10-15T09:19:14.759Z] 09:19:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:30.831 09:19:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:30.831 09:19:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:30.831 09:19:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:30.831 09:19:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:30.831 09:19:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:30.831 09:19:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.831 09:19:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.831 09:19:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.831 09:19:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:30.831 09:19:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.831 09:19:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:30.831 "name": "raid_bdev1", 00:19:30.831 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:30.832 "strip_size_kb": 0, 00:19:30.832 "state": "online", 00:19:30.832 "raid_level": "raid1", 00:19:30.832 "superblock": true, 00:19:30.832 "num_base_bdevs": 2, 00:19:30.832 "num_base_bdevs_discovered": 2, 00:19:30.832 "num_base_bdevs_operational": 2, 00:19:30.832 "process": { 00:19:30.832 "type": "rebuild", 00:19:30.832 "target": "spare", 00:19:30.832 "progress": { 00:19:30.832 "blocks": 57344, 00:19:30.832 "percent": 90 00:19:30.832 } 00:19:30.832 }, 00:19:30.832 "base_bdevs_list": [ 00:19:30.832 { 00:19:30.832 "name": "spare", 00:19:30.832 "uuid": "be580054-ee42-5bb5-aa04-a031a64d8f6b", 00:19:30.832 "is_configured": true, 00:19:30.832 "data_offset": 2048, 00:19:30.832 "data_size": 63488 00:19:30.832 }, 00:19:30.832 { 00:19:30.832 "name": "BaseBdev2", 00:19:30.832 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:30.832 "is_configured": true, 00:19:30.832 "data_offset": 2048, 00:19:30.832 "data_size": 63488 00:19:30.832 } 00:19:30.832 ] 00:19:30.832 }' 00:19:30.832 09:19:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:30.832 09:19:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:30.832 09:19:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:31.091 09:19:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:31.091 09:19:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:31.091 [2024-10-15 09:19:14.919924] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:31.351 [2024-10-15 09:19:15.020006] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:31.351 [2024-10-15 09:19:15.031643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.177 85.62 IOPS, 256.88 MiB/s [2024-10-15T09:19:16.105Z] 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:32.177 "name": "raid_bdev1", 00:19:32.177 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:32.177 "strip_size_kb": 0, 00:19:32.177 "state": "online", 00:19:32.177 "raid_level": "raid1", 00:19:32.177 "superblock": true, 00:19:32.177 "num_base_bdevs": 2, 00:19:32.177 "num_base_bdevs_discovered": 2, 00:19:32.177 "num_base_bdevs_operational": 2, 00:19:32.177 
"base_bdevs_list": [ 00:19:32.177 { 00:19:32.177 "name": "spare", 00:19:32.177 "uuid": "be580054-ee42-5bb5-aa04-a031a64d8f6b", 00:19:32.177 "is_configured": true, 00:19:32.177 "data_offset": 2048, 00:19:32.177 "data_size": 63488 00:19:32.177 }, 00:19:32.177 { 00:19:32.177 "name": "BaseBdev2", 00:19:32.177 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:32.177 "is_configured": true, 00:19:32.177 "data_offset": 2048, 00:19:32.177 "data_size": 63488 00:19:32.177 } 00:19:32.177 ] 00:19:32.177 }' 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.177 09:19:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.177 09:19:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.177 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:32.177 "name": "raid_bdev1", 00:19:32.177 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:32.177 "strip_size_kb": 0, 00:19:32.177 "state": "online", 00:19:32.177 "raid_level": "raid1", 00:19:32.177 "superblock": true, 00:19:32.177 "num_base_bdevs": 2, 00:19:32.177 "num_base_bdevs_discovered": 2, 00:19:32.177 "num_base_bdevs_operational": 2, 00:19:32.177 "base_bdevs_list": [ 00:19:32.177 { 00:19:32.177 "name": "spare", 00:19:32.177 "uuid": "be580054-ee42-5bb5-aa04-a031a64d8f6b", 00:19:32.177 "is_configured": true, 00:19:32.177 "data_offset": 2048, 00:19:32.177 "data_size": 63488 00:19:32.177 }, 00:19:32.177 { 00:19:32.177 "name": "BaseBdev2", 00:19:32.177 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:32.177 "is_configured": true, 00:19:32.177 "data_offset": 2048, 00:19:32.177 "data_size": 63488 00:19:32.177 } 00:19:32.177 ] 00:19:32.177 }' 00:19:32.177 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:32.177 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:32.177 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:32.437 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:32.437 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:32.437 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:32.437 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:32.437 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.437 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.437 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:32.437 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.437 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.437 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.437 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.437 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.437 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.437 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.437 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.437 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.437 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.437 "name": "raid_bdev1", 00:19:32.437 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:32.437 "strip_size_kb": 0, 00:19:32.437 "state": "online", 00:19:32.437 "raid_level": "raid1", 00:19:32.437 "superblock": true, 00:19:32.437 "num_base_bdevs": 2, 00:19:32.437 "num_base_bdevs_discovered": 2, 00:19:32.437 "num_base_bdevs_operational": 2, 00:19:32.437 "base_bdevs_list": [ 00:19:32.437 { 00:19:32.437 "name": "spare", 00:19:32.437 "uuid": "be580054-ee42-5bb5-aa04-a031a64d8f6b", 00:19:32.437 "is_configured": true, 00:19:32.437 
"data_offset": 2048, 00:19:32.437 "data_size": 63488 00:19:32.437 }, 00:19:32.437 { 00:19:32.437 "name": "BaseBdev2", 00:19:32.437 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:32.437 "is_configured": true, 00:19:32.437 "data_offset": 2048, 00:19:32.437 "data_size": 63488 00:19:32.437 } 00:19:32.437 ] 00:19:32.437 }' 00:19:32.437 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.437 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.696 79.44 IOPS, 238.33 MiB/s [2024-10-15T09:19:16.624Z] 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:32.696 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.696 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.696 [2024-10-15 09:19:16.608389] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:32.696 [2024-10-15 09:19:16.608552] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:32.955 00:19:32.955 Latency(us) 00:19:32.955 [2024-10-15T09:19:16.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.955 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:32.955 raid_bdev1 : 9.33 78.21 234.62 0.00 0.00 16929.44 275.55 122016.12 00:19:32.955 [2024-10-15T09:19:16.883Z] =================================================================================================================== 00:19:32.955 [2024-10-15T09:19:16.883Z] Total : 78.21 234.62 0.00 0.00 16929.44 275.55 122016.12 00:19:32.955 { 00:19:32.955 "results": [ 00:19:32.955 { 00:19:32.955 "job": "raid_bdev1", 00:19:32.955 "core_mask": "0x1", 00:19:32.955 "workload": "randrw", 00:19:32.955 "percentage": 50, 00:19:32.955 "status": "finished", 00:19:32.955 
"queue_depth": 2, 00:19:32.955 "io_size": 3145728, 00:19:32.955 "runtime": 9.334198, 00:19:32.955 "iops": 78.20704039061525, 00:19:32.955 "mibps": 234.62112117184574, 00:19:32.955 "io_failed": 0, 00:19:32.955 "io_timeout": 0, 00:19:32.955 "avg_latency_us": 16929.44386550436, 00:19:32.955 "min_latency_us": 275.5490909090909, 00:19:32.955 "max_latency_us": 122016.11636363636 00:19:32.955 } 00:19:32.955 ], 00:19:32.955 "core_count": 1 00:19:32.955 } 00:19:32.955 [2024-10-15 09:19:16.721133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.955 [2024-10-15 09:19:16.721220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:32.955 [2024-10-15 09:19:16.721363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:32.955 [2024-10-15 09:19:16.721381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:32.955 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.955 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.955 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.955 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:19:32.955 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.955 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.955 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:32.955 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:32.955 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:19:32.955 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:19:32.955 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:32.955 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:32.955 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:32.955 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:32.955 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:32.955 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:32.955 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:32.955 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:32.955 09:19:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:19:33.214 /dev/nbd0 00:19:33.214 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:19:33.473 09:19:17 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:33.473 1+0 records in 00:19:33.473 1+0 records out 00:19:33.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674884 s, 6.1 MB/s 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:33.473 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:19:33.777 /dev/nbd1 00:19:33.777 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:33.777 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:33.777 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:33.777 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:19:33.777 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:33.777 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:33.777 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:33.777 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:19:33.777 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:33.777 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:33.777 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:19:33.777 1+0 records in 00:19:33.777 1+0 records out 00:19:33.777 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528692 s, 7.7 MB/s 00:19:33.777 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.777 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:19:33.777 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.777 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:33.777 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:19:33.777 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:33.777 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:33.777 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:34.037 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:34.037 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:34.037 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:34.037 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:34.037 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:34.037 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:34.037 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:34.296 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:34.296 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:34.296 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:34.296 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:34.296 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:34.296 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:34.296 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:34.296 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:34.296 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:34.296 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:34.296 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:34.296 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:34.296 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:34.296 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:34.296 09:19:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.555 [2024-10-15 09:19:18.336626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:34.555 [2024-10-15 09:19:18.336704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.555 [2024-10-15 09:19:18.336745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:19:34.555 [2024-10-15 09:19:18.336762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.555 [2024-10-15 09:19:18.339954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.555 [2024-10-15 09:19:18.340001] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:34.555 [2024-10-15 09:19:18.340334] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:34.555 [2024-10-15 09:19:18.340532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:34.555 [2024-10-15 09:19:18.340744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:34.555 spare 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.555 [2024-10-15 09:19:18.440960] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:34.555 [2024-10-15 09:19:18.441056] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:34.555 [2024-10-15 09:19:18.441579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:19:34.555 [2024-10-15 09:19:18.441869] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:34.555 [2024-10-15 09:19:18.441891] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:34.555 [2024-10-15 09:19:18.442178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.555 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.813 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.813 "name": "raid_bdev1", 00:19:34.813 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:34.813 "strip_size_kb": 0, 00:19:34.813 "state": "online", 00:19:34.813 "raid_level": "raid1", 00:19:34.813 "superblock": true, 00:19:34.813 "num_base_bdevs": 2, 00:19:34.813 "num_base_bdevs_discovered": 2, 00:19:34.813 "num_base_bdevs_operational": 2, 00:19:34.813 "base_bdevs_list": [ 00:19:34.813 { 00:19:34.813 
"name": "spare", 00:19:34.813 "uuid": "be580054-ee42-5bb5-aa04-a031a64d8f6b", 00:19:34.813 "is_configured": true, 00:19:34.813 "data_offset": 2048, 00:19:34.813 "data_size": 63488 00:19:34.813 }, 00:19:34.813 { 00:19:34.813 "name": "BaseBdev2", 00:19:34.813 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:34.813 "is_configured": true, 00:19:34.813 "data_offset": 2048, 00:19:34.813 "data_size": 63488 00:19:34.813 } 00:19:34.813 ] 00:19:34.813 }' 00:19:34.813 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.813 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.071 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:35.071 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.071 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:35.071 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:35.071 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.071 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.071 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.071 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.071 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.071 09:19:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.330 "name": "raid_bdev1", 00:19:35.330 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 
00:19:35.330 "strip_size_kb": 0, 00:19:35.330 "state": "online", 00:19:35.330 "raid_level": "raid1", 00:19:35.330 "superblock": true, 00:19:35.330 "num_base_bdevs": 2, 00:19:35.330 "num_base_bdevs_discovered": 2, 00:19:35.330 "num_base_bdevs_operational": 2, 00:19:35.330 "base_bdevs_list": [ 00:19:35.330 { 00:19:35.330 "name": "spare", 00:19:35.330 "uuid": "be580054-ee42-5bb5-aa04-a031a64d8f6b", 00:19:35.330 "is_configured": true, 00:19:35.330 "data_offset": 2048, 00:19:35.330 "data_size": 63488 00:19:35.330 }, 00:19:35.330 { 00:19:35.330 "name": "BaseBdev2", 00:19:35.330 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:35.330 "is_configured": true, 00:19:35.330 "data_offset": 2048, 00:19:35.330 "data_size": 63488 00:19:35.330 } 00:19:35.330 ] 00:19:35.330 }' 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd 
bdev_raid_remove_base_bdev spare 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.330 [2024-10-15 09:19:19.177237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.330 "name": "raid_bdev1", 00:19:35.330 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:35.330 "strip_size_kb": 0, 00:19:35.330 "state": "online", 00:19:35.330 "raid_level": "raid1", 00:19:35.330 "superblock": true, 00:19:35.330 "num_base_bdevs": 2, 00:19:35.330 "num_base_bdevs_discovered": 1, 00:19:35.330 "num_base_bdevs_operational": 1, 00:19:35.330 "base_bdevs_list": [ 00:19:35.330 { 00:19:35.330 "name": null, 00:19:35.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.330 "is_configured": false, 00:19:35.330 "data_offset": 0, 00:19:35.330 "data_size": 63488 00:19:35.330 }, 00:19:35.330 { 00:19:35.330 "name": "BaseBdev2", 00:19:35.330 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:35.330 "is_configured": true, 00:19:35.330 "data_offset": 2048, 00:19:35.330 "data_size": 63488 00:19:35.330 } 00:19:35.330 ] 00:19:35.330 }' 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.330 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.897 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:35.897 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.897 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.897 [2024-10-15 09:19:19.697591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:35.897 [2024-10-15 09:19:19.697886] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:35.897 [2024-10-15 
09:19:19.697930] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:35.897 [2024-10-15 09:19:19.697992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:35.897 [2024-10-15 09:19:19.716327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:19:35.897 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.897 09:19:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:35.897 [2024-10-15 09:19:19.719330] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:36.876 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:36.876 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:36.876 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:36.876 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:36.876 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.876 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.876 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.876 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.876 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:36.876 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.876 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:36.876 "name": "raid_bdev1", 00:19:36.876 
"uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:36.876 "strip_size_kb": 0, 00:19:36.876 "state": "online", 00:19:36.876 "raid_level": "raid1", 00:19:36.876 "superblock": true, 00:19:36.876 "num_base_bdevs": 2, 00:19:36.876 "num_base_bdevs_discovered": 2, 00:19:36.876 "num_base_bdevs_operational": 2, 00:19:36.876 "process": { 00:19:36.876 "type": "rebuild", 00:19:36.876 "target": "spare", 00:19:36.876 "progress": { 00:19:36.876 "blocks": 20480, 00:19:36.876 "percent": 32 00:19:36.876 } 00:19:36.876 }, 00:19:36.876 "base_bdevs_list": [ 00:19:36.876 { 00:19:36.876 "name": "spare", 00:19:36.876 "uuid": "be580054-ee42-5bb5-aa04-a031a64d8f6b", 00:19:36.876 "is_configured": true, 00:19:36.876 "data_offset": 2048, 00:19:36.877 "data_size": 63488 00:19:36.877 }, 00:19:36.877 { 00:19:36.877 "name": "BaseBdev2", 00:19:36.877 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:36.877 "is_configured": true, 00:19:36.877 "data_offset": 2048, 00:19:36.877 "data_size": 63488 00:19:36.877 } 00:19:36.877 ] 00:19:36.877 }' 00:19:36.877 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.135 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.135 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.135 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.135 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:37.135 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.135 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:37.135 [2024-10-15 09:19:20.889514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:37.135 [2024-10-15 09:19:20.931066] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:37.135 [2024-10-15 09:19:20.931214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.135 [2024-10-15 09:19:20.931241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:37.135 [2024-10-15 09:19:20.931256] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:37.135 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.135 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:37.135 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:37.135 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.135 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:37.135 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:37.135 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:37.135 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.135 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.135 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.135 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.135 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.135 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.135 09:19:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.135 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:37.135 09:19:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.135 09:19:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.135 "name": "raid_bdev1", 00:19:37.135 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:37.136 "strip_size_kb": 0, 00:19:37.136 "state": "online", 00:19:37.136 "raid_level": "raid1", 00:19:37.136 "superblock": true, 00:19:37.136 "num_base_bdevs": 2, 00:19:37.136 "num_base_bdevs_discovered": 1, 00:19:37.136 "num_base_bdevs_operational": 1, 00:19:37.136 "base_bdevs_list": [ 00:19:37.136 { 00:19:37.136 "name": null, 00:19:37.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.136 "is_configured": false, 00:19:37.136 "data_offset": 0, 00:19:37.136 "data_size": 63488 00:19:37.136 }, 00:19:37.136 { 00:19:37.136 "name": "BaseBdev2", 00:19:37.136 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:37.136 "is_configured": true, 00:19:37.136 "data_offset": 2048, 00:19:37.136 "data_size": 63488 00:19:37.136 } 00:19:37.136 ] 00:19:37.136 }' 00:19:37.136 09:19:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.136 09:19:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:37.704 09:19:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:37.704 09:19:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.704 09:19:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:37.704 [2024-10-15 09:19:21.484722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:37.704 [2024-10-15 09:19:21.484957] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.704 [2024-10-15 09:19:21.485038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:37.704 [2024-10-15 09:19:21.485263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.704 [2024-10-15 09:19:21.485940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.704 [2024-10-15 09:19:21.485983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:37.704 [2024-10-15 09:19:21.486132] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:37.704 [2024-10-15 09:19:21.486161] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:37.704 [2024-10-15 09:19:21.486176] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:37.704 [2024-10-15 09:19:21.486227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:37.704 [2024-10-15 09:19:21.504283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:19:37.704 spare 00:19:37.704 09:19:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.704 09:19:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:37.704 [2024-10-15 09:19:21.506991] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:38.640 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.640 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.640 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:38.640 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:38.640 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.640 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.640 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.640 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:38.640 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.640 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.900 "name": "raid_bdev1", 00:19:38.900 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:38.900 "strip_size_kb": 0, 00:19:38.900 
"state": "online", 00:19:38.900 "raid_level": "raid1", 00:19:38.900 "superblock": true, 00:19:38.900 "num_base_bdevs": 2, 00:19:38.900 "num_base_bdevs_discovered": 2, 00:19:38.900 "num_base_bdevs_operational": 2, 00:19:38.900 "process": { 00:19:38.900 "type": "rebuild", 00:19:38.900 "target": "spare", 00:19:38.900 "progress": { 00:19:38.900 "blocks": 20480, 00:19:38.900 "percent": 32 00:19:38.900 } 00:19:38.900 }, 00:19:38.900 "base_bdevs_list": [ 00:19:38.900 { 00:19:38.900 "name": "spare", 00:19:38.900 "uuid": "be580054-ee42-5bb5-aa04-a031a64d8f6b", 00:19:38.900 "is_configured": true, 00:19:38.900 "data_offset": 2048, 00:19:38.900 "data_size": 63488 00:19:38.900 }, 00:19:38.900 { 00:19:38.900 "name": "BaseBdev2", 00:19:38.900 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:38.900 "is_configured": true, 00:19:38.900 "data_offset": 2048, 00:19:38.900 "data_size": 63488 00:19:38.900 } 00:19:38.900 ] 00:19:38.900 }' 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:38.900 [2024-10-15 09:19:22.680731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:38.900 [2024-10-15 09:19:22.718336] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:19:38.900 [2024-10-15 09:19:22.718475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.900 [2024-10-15 09:19:22.718506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:38.900 [2024-10-15 09:19:22.718518] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.900 09:19:22 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.900 "name": "raid_bdev1", 00:19:38.900 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:38.900 "strip_size_kb": 0, 00:19:38.900 "state": "online", 00:19:38.900 "raid_level": "raid1", 00:19:38.900 "superblock": true, 00:19:38.900 "num_base_bdevs": 2, 00:19:38.900 "num_base_bdevs_discovered": 1, 00:19:38.900 "num_base_bdevs_operational": 1, 00:19:38.900 "base_bdevs_list": [ 00:19:38.900 { 00:19:38.900 "name": null, 00:19:38.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.900 "is_configured": false, 00:19:38.900 "data_offset": 0, 00:19:38.900 "data_size": 63488 00:19:38.900 }, 00:19:38.900 { 00:19:38.900 "name": "BaseBdev2", 00:19:38.900 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:38.900 "is_configured": true, 00:19:38.900 "data_offset": 2048, 00:19:38.900 "data_size": 63488 00:19:38.900 } 00:19:38.900 ] 00:19:38.900 }' 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.900 09:19:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:39.467 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:39.467 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.467 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:39.467 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:39.467 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.467 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.468 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.468 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.468 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:39.468 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.468 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:39.468 "name": "raid_bdev1", 00:19:39.468 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:39.468 "strip_size_kb": 0, 00:19:39.468 "state": "online", 00:19:39.468 "raid_level": "raid1", 00:19:39.468 "superblock": true, 00:19:39.468 "num_base_bdevs": 2, 00:19:39.468 "num_base_bdevs_discovered": 1, 00:19:39.468 "num_base_bdevs_operational": 1, 00:19:39.468 "base_bdevs_list": [ 00:19:39.468 { 00:19:39.468 "name": null, 00:19:39.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.468 "is_configured": false, 00:19:39.468 "data_offset": 0, 00:19:39.468 "data_size": 63488 00:19:39.468 }, 00:19:39.468 { 00:19:39.468 "name": "BaseBdev2", 00:19:39.468 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:39.468 "is_configured": true, 00:19:39.468 "data_offset": 2048, 00:19:39.468 "data_size": 63488 00:19:39.468 } 00:19:39.468 ] 00:19:39.468 }' 00:19:39.468 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:39.727 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:39.727 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.727 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:39.727 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:39.727 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.727 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:39.727 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.727 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:39.727 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.727 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:39.727 [2024-10-15 09:19:23.471679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:39.727 [2024-10-15 09:19:23.471766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.727 [2024-10-15 09:19:23.471808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:39.727 [2024-10-15 09:19:23.471826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.727 [2024-10-15 09:19:23.472487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.727 [2024-10-15 09:19:23.472519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:39.727 [2024-10-15 09:19:23.472643] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:39.727 [2024-10-15 09:19:23.472673] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:39.727 [2024-10-15 09:19:23.472691] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:39.727 [2024-10-15 09:19:23.472706] 
bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:39.727 BaseBdev1 00:19:39.727 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.727 09:19:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:40.664 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:40.664 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.664 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.664 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:40.664 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.664 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:40.664 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.664 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.664 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.664 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.664 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.664 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.664 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:40.664 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.664 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.664 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.664 "name": "raid_bdev1", 00:19:40.664 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:40.664 "strip_size_kb": 0, 00:19:40.664 "state": "online", 00:19:40.664 "raid_level": "raid1", 00:19:40.664 "superblock": true, 00:19:40.664 "num_base_bdevs": 2, 00:19:40.664 "num_base_bdevs_discovered": 1, 00:19:40.664 "num_base_bdevs_operational": 1, 00:19:40.664 "base_bdevs_list": [ 00:19:40.664 { 00:19:40.664 "name": null, 00:19:40.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.664 "is_configured": false, 00:19:40.664 "data_offset": 0, 00:19:40.664 "data_size": 63488 00:19:40.664 }, 00:19:40.664 { 00:19:40.664 "name": "BaseBdev2", 00:19:40.664 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:40.664 "is_configured": true, 00:19:40.664 "data_offset": 2048, 00:19:40.664 "data_size": 63488 00:19:40.664 } 00:19:40.664 ] 00:19:40.664 }' 00:19:40.664 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.664 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:41.231 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:41.231 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:41.231 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:41.231 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:41.231 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:41.231 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.231 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:19:41.231 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.231 09:19:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:41.231 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.231 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.231 "name": "raid_bdev1", 00:19:41.231 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:41.231 "strip_size_kb": 0, 00:19:41.231 "state": "online", 00:19:41.231 "raid_level": "raid1", 00:19:41.231 "superblock": true, 00:19:41.231 "num_base_bdevs": 2, 00:19:41.231 "num_base_bdevs_discovered": 1, 00:19:41.231 "num_base_bdevs_operational": 1, 00:19:41.231 "base_bdevs_list": [ 00:19:41.231 { 00:19:41.231 "name": null, 00:19:41.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.231 "is_configured": false, 00:19:41.231 "data_offset": 0, 00:19:41.232 "data_size": 63488 00:19:41.232 }, 00:19:41.232 { 00:19:41.232 "name": "BaseBdev2", 00:19:41.232 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:41.232 "is_configured": true, 00:19:41.232 "data_offset": 2048, 00:19:41.232 "data_size": 63488 00:19:41.232 } 00:19:41.232 ] 00:19:41.232 }' 00:19:41.232 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:41.232 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:41.232 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:41.232 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:41.232 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:41.232 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@650 -- # local es=0 00:19:41.232 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:41.232 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:41.232 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:41.232 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:41.232 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:41.232 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:41.232 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.232 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:41.490 [2024-10-15 09:19:25.160363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:41.491 [2024-10-15 09:19:25.160611] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:41.491 [2024-10-15 09:19:25.160642] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:41.491 request: 00:19:41.491 { 00:19:41.491 "base_bdev": "BaseBdev1", 00:19:41.491 "raid_bdev": "raid_bdev1", 00:19:41.491 "method": "bdev_raid_add_base_bdev", 00:19:41.491 "req_id": 1 00:19:41.491 } 00:19:41.491 Got JSON-RPC error response 00:19:41.491 response: 00:19:41.491 { 00:19:41.491 "code": -22, 00:19:41.491 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:41.491 } 00:19:41.491 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:19:41.491 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:19:41.491 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:41.491 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:41.491 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:41.491 09:19:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:42.426 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:42.426 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:42.426 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:42.426 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:42.426 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:42.426 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:42.426 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.426 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.426 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.426 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.426 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.426 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.426 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:42.426 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:42.426 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.426 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.426 "name": "raid_bdev1", 00:19:42.426 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:42.426 "strip_size_kb": 0, 00:19:42.426 "state": "online", 00:19:42.426 "raid_level": "raid1", 00:19:42.426 "superblock": true, 00:19:42.426 "num_base_bdevs": 2, 00:19:42.426 "num_base_bdevs_discovered": 1, 00:19:42.426 "num_base_bdevs_operational": 1, 00:19:42.426 "base_bdevs_list": [ 00:19:42.426 { 00:19:42.426 "name": null, 00:19:42.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.426 "is_configured": false, 00:19:42.426 "data_offset": 0, 00:19:42.426 "data_size": 63488 00:19:42.426 }, 00:19:42.426 { 00:19:42.426 "name": "BaseBdev2", 00:19:42.426 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:42.426 "is_configured": true, 00:19:42.426 "data_offset": 2048, 00:19:42.426 "data_size": 63488 00:19:42.426 } 00:19:42.426 ] 00:19:42.426 }' 00:19:42.426 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.426 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:42.993 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:42.993 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:42.993 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:42.993 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:42.993 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:42.993 09:19:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.993 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.993 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.993 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:42.993 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.993 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:42.993 "name": "raid_bdev1", 00:19:42.993 "uuid": "42cc8240-dba5-4b88-a251-129254406a6d", 00:19:42.993 "strip_size_kb": 0, 00:19:42.993 "state": "online", 00:19:42.993 "raid_level": "raid1", 00:19:42.993 "superblock": true, 00:19:42.993 "num_base_bdevs": 2, 00:19:42.993 "num_base_bdevs_discovered": 1, 00:19:42.993 "num_base_bdevs_operational": 1, 00:19:42.993 "base_bdevs_list": [ 00:19:42.993 { 00:19:42.993 "name": null, 00:19:42.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.993 "is_configured": false, 00:19:42.993 "data_offset": 0, 00:19:42.993 "data_size": 63488 00:19:42.993 }, 00:19:42.993 { 00:19:42.993 "name": "BaseBdev2", 00:19:42.993 "uuid": "08e59cef-e8c5-5d74-ba96-61c2a3df5c3a", 00:19:42.993 "is_configured": true, 00:19:42.993 "data_offset": 2048, 00:19:42.993 "data_size": 63488 00:19:42.993 } 00:19:42.993 ] 00:19:42.993 }' 00:19:42.993 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:42.993 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:42.993 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:42.993 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:42.993 09:19:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77357 00:19:42.993 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 77357 ']' 00:19:42.993 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 77357 00:19:42.993 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:19:42.993 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:42.993 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77357 00:19:42.993 killing process with pid 77357 00:19:42.993 Received shutdown signal, test time was about 19.548392 seconds 00:19:42.993 00:19:42.993 Latency(us) 00:19:42.994 [2024-10-15T09:19:26.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.994 [2024-10-15T09:19:26.922Z] =================================================================================================================== 00:19:42.994 [2024-10-15T09:19:26.922Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:42.994 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:42.994 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:42.994 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77357' 00:19:42.994 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 77357 00:19:42.994 [2024-10-15 09:19:26.914546] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:42.994 09:19:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 77357 00:19:42.994 [2024-10-15 09:19:26.914735] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:42.994 [2024-10-15 09:19:26.914815] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:42.994 [2024-10-15 09:19:26.914840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:43.253 [2024-10-15 09:19:27.139669] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:44.667 09:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:19:44.667 00:19:44.667 real 0m23.030s 00:19:44.667 user 0m30.900s 00:19:44.667 sys 0m2.215s 00:19:44.667 09:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:44.667 09:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:44.667 ************************************ 00:19:44.667 END TEST raid_rebuild_test_sb_io 00:19:44.667 ************************************ 00:19:44.667 09:19:28 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:19:44.667 09:19:28 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:19:44.667 09:19:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:44.667 09:19:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:44.667 09:19:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:44.667 ************************************ 00:19:44.667 START TEST raid_rebuild_test 00:19:44.667 ************************************ 00:19:44.667 09:19:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:19:44.667 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:44.667 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:44.667 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:44.667 09:19:28 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:44.667 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:44.667 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:44.667 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.667 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:44.667 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:44.667 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.667 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:44.667 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:44.667 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.667 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=78076 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 78076 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 78076 ']' 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:44.668 09:19:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.668 [2024-10-15 09:19:28.476005] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:19:44.668 [2024-10-15 09:19:28.476471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:19:44.668 Zero copy mechanism will not be used. 00:19:44.668 -allocations --file-prefix=spdk_pid78076 ] 00:19:44.927 [2024-10-15 09:19:28.643441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.927 [2024-10-15 09:19:28.791555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.186 [2024-10-15 09:19:29.014300] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:45.186 [2024-10-15 09:19:29.014382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.754 BaseBdev1_malloc 00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:19:45.754 [2024-10-15 09:19:29.563920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:45.754 [2024-10-15 09:19:29.564014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.754 [2024-10-15 09:19:29.564048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:45.754 [2024-10-15 09:19:29.564067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.754 [2024-10-15 09:19:29.567091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.754 [2024-10-15 09:19:29.567155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:45.754 BaseBdev1 00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.754 BaseBdev2_malloc 00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.754 [2024-10-15 09:19:29.623185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:45.754 [2024-10-15 09:19:29.623271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened
00:19:45.754 [2024-10-15 09:19:29.623303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:19:45.754 [2024-10-15 09:19:29.623321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:45.754 [2024-10-15 09:19:29.626295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:45.754 [2024-10-15 09:19:29.626485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:19:45.754 BaseBdev2
00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:45.754 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:46.013 BaseBdev3_malloc
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:46.013 [2024-10-15 09:19:29.693804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:19:46.013 [2024-10-15 09:19:29.694040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:46.013 [2024-10-15 09:19:29.694086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:19:46.013 [2024-10-15 09:19:29.694106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:46.013 [2024-10-15 09:19:29.697095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:46.013 [2024-10-15 09:19:29.697291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:19:46.013 BaseBdev3
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:46.013 BaseBdev4_malloc
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:46.013 [2024-10-15 09:19:29.753297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:19:46.013 [2024-10-15 09:19:29.753391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:46.013 [2024-10-15 09:19:29.753432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:19:46.013 [2024-10-15 09:19:29.753452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:46.013 [2024-10-15 09:19:29.756438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:46.013 [2024-10-15 09:19:29.756620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:19:46.013 BaseBdev4
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:46.013 spare_malloc
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:46.013 spare_delay
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:46.013 [2024-10-15 09:19:29.821070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:19:46.013 [2024-10-15 09:19:29.821176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:46.013 [2024-10-15 09:19:29.821214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:19:46.013 [2024-10-15 09:19:29.821233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:46.013 [2024-10-15 09:19:29.824202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:46.013 [2024-10-15 09:19:29.824253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:19:46.013 spare
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:46.013 [2024-10-15 09:19:29.833230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:19:46.013 [2024-10-15 09:19:29.835850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:19:46.013 [2024-10-15 09:19:29.836083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:19:46.013 [2024-10-15 09:19:29.836193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:19:46.013 [2024-10-15 09:19:29.836334] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:19:46.013 [2024-10-15 09:19:29.836355] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:19:46.013 [2024-10-15 09:19:29.836750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:19:46.013 [2024-10-15 09:19:29.837003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:19:46.013 [2024-10-15 09:19:29.837024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:19:46.013 [2024-10-15 09:19:29.837317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:46.013 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:46.014 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:46.014 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:46.014 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.014 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:46.014 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.014 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:46.014 "name": "raid_bdev1",
00:19:46.014 "uuid": "52561047-34e9-42da-bb03-2c5f8ffbf2b8",
00:19:46.014 "strip_size_kb": 0,
00:19:46.014 "state": "online",
00:19:46.014 "raid_level": "raid1",
00:19:46.014 "superblock": false,
00:19:46.014 "num_base_bdevs": 4,
00:19:46.014 "num_base_bdevs_discovered": 4,
00:19:46.014 "num_base_bdevs_operational": 4,
00:19:46.014 "base_bdevs_list": [
00:19:46.014 {
00:19:46.014 "name": "BaseBdev1",
00:19:46.014 "uuid": "beda9be3-1250-563a-8fdd-841bbd50892e",
00:19:46.014 "is_configured": true,
00:19:46.014 "data_offset": 0,
00:19:46.014 "data_size": 65536
00:19:46.014 },
00:19:46.014 {
00:19:46.014 "name": "BaseBdev2",
00:19:46.014 "uuid": "5f3ab138-e47d-5000-9cd1-33d36282a0b5",
00:19:46.014 "is_configured": true,
00:19:46.014 "data_offset": 0,
00:19:46.014 "data_size": 65536
00:19:46.014 },
00:19:46.014 {
00:19:46.014 "name": "BaseBdev3",
00:19:46.014 "uuid": "dcc1ead0-ac6d-5980-8727-beca15306d15",
00:19:46.014 "is_configured": true,
00:19:46.014 "data_offset": 0,
00:19:46.014 "data_size": 65536
00:19:46.014 },
00:19:46.014 {
00:19:46.014 "name": "BaseBdev4",
00:19:46.014 "uuid": "b2b69300-58d2-5ad8-b337-4aaa4ee0b6e5",
00:19:46.014 "is_configured": true,
00:19:46.014 "data_offset": 0,
00:19:46.014 "data_size": 65536
00:19:46.014 }
00:19:46.014 ]
00:19:46.014 }'
00:19:46.014 09:19:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:46.014 09:19:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:19:46.580 [2024-10-15 09:19:30.365871] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:19:46.580 09:19:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:19:47.146 [2024-10-15 09:19:30.773623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:19:47.146 /dev/nbd0
00:19:47.146 09:19:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:19:47.146 09:19:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:19:47.146 09:19:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:19:47.146 09:19:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:19:47.146 09:19:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:19:47.146 09:19:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:19:47.146 09:19:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:19:47.146 09:19:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break
00:19:47.146 09:19:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:19:47.146 09:19:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:19:47.146 09:19:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:47.147 1+0 records in
00:19:47.147 1+0 records out
00:19:47.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371102 s, 11.0 MB/s
00:19:47.147 09:19:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:47.147 09:19:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:19:47.147 09:19:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:47.147 09:19:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:19:47.147 09:19:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:19:47.147 09:19:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:47.147 09:19:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:19:47.147 09:19:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:19:47.147 09:19:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:19:47.147 09:19:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
00:19:57.119 65536+0 records in
00:19:57.119 65536+0 records out
00:19:57.119 33554432 bytes (34 MB, 32 MiB) copied, 8.60013 s, 3.9 MB/s
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:19:57.119 [2024-10-15 09:19:39.682817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:57.119 [2024-10-15 09:19:39.710900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:57.119 "name": "raid_bdev1",
00:19:57.119 "uuid": "52561047-34e9-42da-bb03-2c5f8ffbf2b8",
00:19:57.119 "strip_size_kb": 0,
00:19:57.119 "state": "online",
00:19:57.119 "raid_level": "raid1",
00:19:57.119 "superblock": false,
00:19:57.119 "num_base_bdevs": 4,
00:19:57.119 "num_base_bdevs_discovered": 3,
00:19:57.119 "num_base_bdevs_operational": 3,
00:19:57.119 "base_bdevs_list": [
00:19:57.119 {
00:19:57.119 "name": null,
00:19:57.119 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:57.119 "is_configured": false,
00:19:57.119 "data_offset": 0,
00:19:57.119 "data_size": 65536
00:19:57.119 },
00:19:57.119 {
00:19:57.119 "name": "BaseBdev2",
00:19:57.119 "uuid": "5f3ab138-e47d-5000-9cd1-33d36282a0b5",
00:19:57.119 "is_configured": true,
00:19:57.119 "data_offset": 0,
00:19:57.119 "data_size": 65536
00:19:57.119 },
00:19:57.119 {
00:19:57.119 "name": "BaseBdev3",
00:19:57.119 "uuid": "dcc1ead0-ac6d-5980-8727-beca15306d15",
00:19:57.119 "is_configured": true,
00:19:57.119 "data_offset": 0,
00:19:57.119 "data_size": 65536
00:19:57.119 },
00:19:57.119 {
00:19:57.119 "name": "BaseBdev4",
00:19:57.119 "uuid": "b2b69300-58d2-5ad8-b337-4aaa4ee0b6e5",
00:19:57.119 "is_configured": true,
00:19:57.119 "data_offset": 0,
00:19:57.119 "data_size": 65536
00:19:57.119 }
00:19:57.119 ]
00:19:57.119 }'
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:57.119 09:19:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:57.119 09:19:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:19:57.119 09:19:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:57.119 09:19:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:57.119 [2024-10-15 09:19:40.231101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:19:57.119 [2024-10-15 09:19:40.246815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70
00:19:57.119 09:19:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:57.119 09:19:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1
00:19:57.119 [2024-10-15 09:19:40.249561] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:19:57.378 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:57.378 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:19:57.378 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:19:57.378 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:19:57.378 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:19:57.378 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:57.378 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:57.378 09:19:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:57.378 09:19:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:57.378 09:19:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:57.637 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:19:57.637 "name": "raid_bdev1",
00:19:57.637 "uuid": "52561047-34e9-42da-bb03-2c5f8ffbf2b8",
00:19:57.637 "strip_size_kb": 0,
00:19:57.637 "state": "online",
00:19:57.637 "raid_level": "raid1",
00:19:57.637 "superblock": false,
00:19:57.637 "num_base_bdevs": 4,
00:19:57.637 "num_base_bdevs_discovered": 4,
00:19:57.637 "num_base_bdevs_operational": 4,
00:19:57.637 "process": {
00:19:57.637 "type": "rebuild",
00:19:57.637 "target": "spare",
00:19:57.637 "progress": {
00:19:57.637 "blocks": 20480,
00:19:57.637 "percent": 31
00:19:57.637 }
00:19:57.637 },
00:19:57.637 "base_bdevs_list": [
00:19:57.637 {
00:19:57.637 "name": "spare",
00:19:57.637 "uuid": "95a7d63c-0499-5746-9a53-daf4bc5090c1",
00:19:57.637 "is_configured": true,
00:19:57.637 "data_offset": 0,
00:19:57.637 "data_size": 65536
00:19:57.637 },
00:19:57.637 {
00:19:57.637 "name": "BaseBdev2",
00:19:57.637 "uuid": "5f3ab138-e47d-5000-9cd1-33d36282a0b5",
00:19:57.637 "is_configured": true,
00:19:57.637 "data_offset": 0,
00:19:57.637 "data_size": 65536
00:19:57.637 },
00:19:57.637 {
00:19:57.637 "name": "BaseBdev3",
00:19:57.637 "uuid": "dcc1ead0-ac6d-5980-8727-beca15306d15",
00:19:57.637 "is_configured": true,
00:19:57.637 "data_offset": 0,
00:19:57.637 "data_size": 65536
00:19:57.637 },
00:19:57.637 {
00:19:57.637 "name": "BaseBdev4",
00:19:57.637 "uuid": "b2b69300-58d2-5ad8-b337-4aaa4ee0b6e5",
00:19:57.637 "is_configured": true,
00:19:57.637 "data_offset": 0,
00:19:57.637 "data_size": 65536
00:19:57.637 }
00:19:57.637 ]
00:19:57.637 }'
00:19:57.637 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:19:57.637 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:57.637 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:19:57.637 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:19:57.637 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:19:57.637 09:19:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:57.637 09:19:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:57.637 [2024-10-15 09:19:41.412570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:19:57.637 [2024-10-15 09:19:41.461433] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:19:57.637 [2024-10-15 09:19:41.461577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:57.637 [2024-10-15 09:19:41.461604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:19:57.637 [2024-10-15 09:19:41.461619] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:19:57.637 09:19:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:57.637 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:19:57.637 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:57.637 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:19:57.637 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:19:57.637 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:19:57.637 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:19:57.637 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:57.637 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:57.638 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:57.638 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:57.638 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:57.638 09:19:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:57.638 09:19:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:57.638 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:57.638 09:19:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:57.638 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:57.638 "name": "raid_bdev1",
00:19:57.638 "uuid": "52561047-34e9-42da-bb03-2c5f8ffbf2b8",
00:19:57.638 "strip_size_kb": 0,
00:19:57.638 "state": "online",
00:19:57.638 "raid_level": "raid1",
00:19:57.638 "superblock": false,
00:19:57.638 "num_base_bdevs": 4,
00:19:57.638 "num_base_bdevs_discovered": 3,
00:19:57.638 "num_base_bdevs_operational": 3,
00:19:57.638 "base_bdevs_list": [
00:19:57.638 {
00:19:57.638 "name": null,
00:19:57.638 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:57.638 "is_configured": false,
00:19:57.638 "data_offset": 0,
00:19:57.638 "data_size": 65536
00:19:57.638 },
00:19:57.638 {
00:19:57.638 "name": "BaseBdev2",
00:19:57.638 "uuid": "5f3ab138-e47d-5000-9cd1-33d36282a0b5",
00:19:57.638 "is_configured": true,
00:19:57.638 "data_offset": 0,
00:19:57.638 "data_size": 65536
00:19:57.638 },
00:19:57.638 {
00:19:57.638 "name": "BaseBdev3",
00:19:57.638 "uuid": "dcc1ead0-ac6d-5980-8727-beca15306d15",
00:19:57.638 "is_configured": true,
00:19:57.638 "data_offset": 0,
00:19:57.638 "data_size": 65536
00:19:57.638 },
00:19:57.638 {
00:19:57.638 "name": "BaseBdev4",
00:19:57.638 "uuid": "b2b69300-58d2-5ad8-b337-4aaa4ee0b6e5",
00:19:57.638 "is_configured": true,
00:19:57.638 "data_offset": 0,
00:19:57.638 "data_size": 65536
00:19:57.638 }
00:19:57.638 ]
00:19:57.638 }'
00:19:57.638 09:19:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:57.638 09:19:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:58.204 09:19:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:19:58.204 09:19:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:19:58.204 09:19:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:19:58.204 09:19:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:19:58.204 09:19:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:19:58.204 09:19:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:58.204 09:19:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:58.204 09:19:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:58.204 09:19:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:58.204 09:19:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:58.204 09:19:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:19:58.204 "name": "raid_bdev1",
00:19:58.204 "uuid": "52561047-34e9-42da-bb03-2c5f8ffbf2b8",
00:19:58.204 "strip_size_kb": 0,
00:19:58.204 "state": "online",
00:19:58.205 "raid_level": "raid1",
00:19:58.205 "superblock": false,
00:19:58.205 "num_base_bdevs": 4,
00:19:58.205 "num_base_bdevs_discovered": 3,
00:19:58.205 "num_base_bdevs_operational": 3,
00:19:58.205 "base_bdevs_list": [
00:19:58.205 {
00:19:58.205 "name": null,
00:19:58.205 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:58.205 "is_configured": false,
00:19:58.205 "data_offset": 0,
00:19:58.205 "data_size": 65536
00:19:58.205 },
00:19:58.205 {
00:19:58.205 "name": "BaseBdev2",
00:19:58.205 "uuid": "5f3ab138-e47d-5000-9cd1-33d36282a0b5",
00:19:58.205 "is_configured": true,
00:19:58.205 "data_offset": 0,
00:19:58.205 "data_size": 65536
00:19:58.205 },
00:19:58.205 {
00:19:58.205 "name": "BaseBdev3",
00:19:58.205 "uuid": "dcc1ead0-ac6d-5980-8727-beca15306d15",
00:19:58.205 "is_configured": true,
00:19:58.205 "data_offset": 0,
00:19:58.205 "data_size": 65536
00:19:58.205 },
00:19:58.205 {
00:19:58.205 "name": "BaseBdev4",
00:19:58.205 "uuid": "b2b69300-58d2-5ad8-b337-4aaa4ee0b6e5",
00:19:58.205 "is_configured": true,
00:19:58.205 "data_offset": 0,
00:19:58.205 "data_size": 65536
00:19:58.205 }
00:19:58.205 ]
00:19:58.205 }'
00:19:58.205 09:19:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:19:58.462 09:19:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:19:58.462 09:19:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:19:58.462 09:19:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:19:58.462 09:19:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:19:58.462 09:19:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:58.462 09:19:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:58.462 [2024-10-15 09:19:42.188468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:19:58.462 [2024-10-15 09:19:42.202602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40
00:19:58.462 09:19:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:58.462 09:19:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1
00:19:58.462 [2024-10-15 09:19:42.205493] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:19:59.419 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:59.419 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:19:59.419 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:19:59.419 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:19:59.419 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:19:59.419 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:59.419 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:59.419 09:19:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:59.419 09:19:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:59.419 09:19:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:59.419 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:19:59.419 "name": "raid_bdev1",
00:19:59.419 "uuid": "52561047-34e9-42da-bb03-2c5f8ffbf2b8",
00:19:59.419 "strip_size_kb": 0,
00:19:59.419 "state": "online",
00:19:59.419 "raid_level": "raid1",
00:19:59.419 "superblock": false,
00:19:59.419 "num_base_bdevs": 4,
00:19:59.419 "num_base_bdevs_discovered": 4,
00:19:59.419 "num_base_bdevs_operational": 4,
00:19:59.419 "process": {
00:19:59.419 "type": "rebuild",
00:19:59.419 "target": "spare",
00:19:59.419 "progress": {
00:19:59.419 "blocks": 20480,
00:19:59.419 "percent": 31
00:19:59.419 }
00:19:59.419 },
00:19:59.419 "base_bdevs_list": [
00:19:59.419 {
00:19:59.419 "name": "spare",
00:19:59.419 "uuid": "95a7d63c-0499-5746-9a53-daf4bc5090c1",
00:19:59.419 "is_configured": true,
00:19:59.419 "data_offset": 0,
00:19:59.419 "data_size": 65536
00:19:59.419 },
00:19:59.419 {
00:19:59.419 "name": "BaseBdev2",
00:19:59.419 "uuid": "5f3ab138-e47d-5000-9cd1-33d36282a0b5",
00:19:59.419 "is_configured": true,
00:19:59.419 "data_offset": 0,
00:19:59.419 "data_size": 65536
00:19:59.419 },
00:19:59.419 {
00:19:59.419 "name": "BaseBdev3",
00:19:59.419 "uuid": "dcc1ead0-ac6d-5980-8727-beca15306d15",
00:19:59.419 "is_configured": true,
00:19:59.419 "data_offset": 0,
00:19:59.419 "data_size": 65536
00:19:59.419 },
00:19:59.419 {
00:19:59.419 "name": "BaseBdev4",
00:19:59.419 "uuid": "b2b69300-58d2-5ad8-b337-4aaa4ee0b6e5",
00:19:59.419 "is_configured": true,
00:19:59.419 "data_offset": 0,
00:19:59.419 "data_size": 65536
00:19:59.419 }
00:19:59.419 ]
00:19:59.419 }'
00:19:59.419 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:19:59.419 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:59.419 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:19:59.678 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:19:59.678 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:19:59.678 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:19:59.678 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:19:59.678 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']'
00:19:59.678 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:19:59.678 09:19:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:59.678 09:19:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:59.678 [2024-10-15 09:19:43.379463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:19:59.678 [2024-10-15 09:19:43.417163] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40
00:19:59.678 09:19:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:59.678 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]=
00:19:59.678 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- ))
00:19:59.678 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:59.678 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:19:59.678 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:19:59.678 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:19:59.678 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:19:59.678 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:59.678 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:59.678 09:19:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:59.678 09:19:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:59.678 09:19:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:59.679 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:19:59.679 "name": "raid_bdev1",
00:19:59.679 "uuid": "52561047-34e9-42da-bb03-2c5f8ffbf2b8",
00:19:59.679 "strip_size_kb": 0,
00:19:59.679 "state": "online",
00:19:59.679 "raid_level": "raid1",
00:19:59.679 "superblock": false,
00:19:59.679 "num_base_bdevs": 4,
00:19:59.679 "num_base_bdevs_discovered": 3,
00:19:59.679 "num_base_bdevs_operational": 3,
00:19:59.679 "process": {
00:19:59.679 "type": "rebuild",
00:19:59.679 "target": "spare",
00:19:59.679 "progress": {
00:19:59.679 "blocks": 24576,
00:19:59.679 "percent": 37
00:19:59.679 }
00:19:59.679 },
00:19:59.679 "base_bdevs_list": [
00:19:59.679 {
00:19:59.679 "name": "spare",
00:19:59.679 "uuid": "95a7d63c-0499-5746-9a53-daf4bc5090c1",
00:19:59.679 "is_configured": true,
00:19:59.679 "data_offset": 0,
00:19:59.679 "data_size": 65536
00:19:59.679 },
00:19:59.679 {
00:19:59.679 "name": null,
00:19:59.679 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:59.679 "is_configured": false,
00:19:59.679 "data_offset": 0,
00:19:59.679 "data_size": 65536
00:19:59.679 },
00:19:59.679 {
00:19:59.679 "name": "BaseBdev3",
00:19:59.679 "uuid": "dcc1ead0-ac6d-5980-8727-beca15306d15",
00:19:59.679 "is_configured": true,
00:19:59.679 "data_offset": 0,
00:19:59.679 "data_size": 65536
00:19:59.679 },
00:19:59.679 {
00:19:59.679 "name": "BaseBdev4",
00:19:59.679 "uuid": "b2b69300-58d2-5ad8-b337-4aaa4ee0b6e5",
00:19:59.679 "is_configured": true,
00:19:59.679 "data_offset": 0,
00:19:59.679 "data_size": 65536
00:19:59.679 }
00:19:59.679 ]
00:19:59.679 }'
00:19:59.679 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:19:59.679 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:59.679 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq
-r '.process.target // "none"' 00:19:59.679 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:59.679 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=494 00:19:59.679 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:59.679 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:59.679 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:59.679 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:59.679 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:59.679 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:59.679 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.679 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.679 09:19:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.679 09:19:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.679 09:19:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.937 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:59.938 "name": "raid_bdev1", 00:19:59.938 "uuid": "52561047-34e9-42da-bb03-2c5f8ffbf2b8", 00:19:59.938 "strip_size_kb": 0, 00:19:59.938 "state": "online", 00:19:59.938 "raid_level": "raid1", 00:19:59.938 "superblock": false, 00:19:59.938 "num_base_bdevs": 4, 00:19:59.938 "num_base_bdevs_discovered": 3, 00:19:59.938 "num_base_bdevs_operational": 3, 00:19:59.938 "process": { 00:19:59.938 "type": "rebuild", 00:19:59.938 "target": "spare", 00:19:59.938 "progress": { 
00:19:59.938 "blocks": 26624, 00:19:59.938 "percent": 40 00:19:59.938 } 00:19:59.938 }, 00:19:59.938 "base_bdevs_list": [ 00:19:59.938 { 00:19:59.938 "name": "spare", 00:19:59.938 "uuid": "95a7d63c-0499-5746-9a53-daf4bc5090c1", 00:19:59.938 "is_configured": true, 00:19:59.938 "data_offset": 0, 00:19:59.938 "data_size": 65536 00:19:59.938 }, 00:19:59.938 { 00:19:59.938 "name": null, 00:19:59.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.938 "is_configured": false, 00:19:59.938 "data_offset": 0, 00:19:59.938 "data_size": 65536 00:19:59.938 }, 00:19:59.938 { 00:19:59.938 "name": "BaseBdev3", 00:19:59.938 "uuid": "dcc1ead0-ac6d-5980-8727-beca15306d15", 00:19:59.938 "is_configured": true, 00:19:59.938 "data_offset": 0, 00:19:59.938 "data_size": 65536 00:19:59.938 }, 00:19:59.938 { 00:19:59.938 "name": "BaseBdev4", 00:19:59.938 "uuid": "b2b69300-58d2-5ad8-b337-4aaa4ee0b6e5", 00:19:59.938 "is_configured": true, 00:19:59.938 "data_offset": 0, 00:19:59.938 "data_size": 65536 00:19:59.938 } 00:19:59.938 ] 00:19:59.938 }' 00:19:59.938 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:59.938 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:59.938 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:59.938 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:59.938 09:19:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:00.873 09:19:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:00.873 09:19:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:00.873 09:19:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.873 09:19:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:20:00.873 09:19:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:00.873 09:19:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.873 09:19:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.873 09:19:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.873 09:19:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.873 09:19:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.873 09:19:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.132 09:19:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:01.132 "name": "raid_bdev1", 00:20:01.132 "uuid": "52561047-34e9-42da-bb03-2c5f8ffbf2b8", 00:20:01.132 "strip_size_kb": 0, 00:20:01.132 "state": "online", 00:20:01.132 "raid_level": "raid1", 00:20:01.132 "superblock": false, 00:20:01.132 "num_base_bdevs": 4, 00:20:01.132 "num_base_bdevs_discovered": 3, 00:20:01.133 "num_base_bdevs_operational": 3, 00:20:01.133 "process": { 00:20:01.133 "type": "rebuild", 00:20:01.133 "target": "spare", 00:20:01.133 "progress": { 00:20:01.133 "blocks": 51200, 00:20:01.133 "percent": 78 00:20:01.133 } 00:20:01.133 }, 00:20:01.133 "base_bdevs_list": [ 00:20:01.133 { 00:20:01.133 "name": "spare", 00:20:01.133 "uuid": "95a7d63c-0499-5746-9a53-daf4bc5090c1", 00:20:01.133 "is_configured": true, 00:20:01.133 "data_offset": 0, 00:20:01.133 "data_size": 65536 00:20:01.133 }, 00:20:01.133 { 00:20:01.133 "name": null, 00:20:01.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.133 "is_configured": false, 00:20:01.133 "data_offset": 0, 00:20:01.133 "data_size": 65536 00:20:01.133 }, 00:20:01.133 { 00:20:01.133 "name": "BaseBdev3", 00:20:01.133 "uuid": 
"dcc1ead0-ac6d-5980-8727-beca15306d15", 00:20:01.133 "is_configured": true, 00:20:01.133 "data_offset": 0, 00:20:01.133 "data_size": 65536 00:20:01.133 }, 00:20:01.133 { 00:20:01.133 "name": "BaseBdev4", 00:20:01.133 "uuid": "b2b69300-58d2-5ad8-b337-4aaa4ee0b6e5", 00:20:01.133 "is_configured": true, 00:20:01.133 "data_offset": 0, 00:20:01.133 "data_size": 65536 00:20:01.133 } 00:20:01.133 ] 00:20:01.133 }' 00:20:01.133 09:19:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:01.133 09:19:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:01.133 09:19:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:01.133 09:19:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:01.133 09:19:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:01.725 [2024-10-15 09:19:45.436339] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:01.725 [2024-10-15 09:19:45.436476] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:01.725 [2024-10-15 09:19:45.436569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.293 09:19:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:02.293 09:19:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:02.293 09:19:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.293 09:19:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:02.293 09:19:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:02.293 09:19:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.293 09:19:45 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.293 09:19:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.293 09:19:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.293 09:19:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.293 09:19:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.293 09:19:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.293 "name": "raid_bdev1", 00:20:02.293 "uuid": "52561047-34e9-42da-bb03-2c5f8ffbf2b8", 00:20:02.293 "strip_size_kb": 0, 00:20:02.293 "state": "online", 00:20:02.293 "raid_level": "raid1", 00:20:02.293 "superblock": false, 00:20:02.293 "num_base_bdevs": 4, 00:20:02.293 "num_base_bdevs_discovered": 3, 00:20:02.293 "num_base_bdevs_operational": 3, 00:20:02.293 "base_bdevs_list": [ 00:20:02.293 { 00:20:02.293 "name": "spare", 00:20:02.293 "uuid": "95a7d63c-0499-5746-9a53-daf4bc5090c1", 00:20:02.293 "is_configured": true, 00:20:02.293 "data_offset": 0, 00:20:02.293 "data_size": 65536 00:20:02.293 }, 00:20:02.293 { 00:20:02.293 "name": null, 00:20:02.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.293 "is_configured": false, 00:20:02.293 "data_offset": 0, 00:20:02.293 "data_size": 65536 00:20:02.293 }, 00:20:02.293 { 00:20:02.293 "name": "BaseBdev3", 00:20:02.293 "uuid": "dcc1ead0-ac6d-5980-8727-beca15306d15", 00:20:02.293 "is_configured": true, 00:20:02.293 "data_offset": 0, 00:20:02.293 "data_size": 65536 00:20:02.293 }, 00:20:02.293 { 00:20:02.293 "name": "BaseBdev4", 00:20:02.293 "uuid": "b2b69300-58d2-5ad8-b337-4aaa4ee0b6e5", 00:20:02.293 "is_configured": true, 00:20:02.293 "data_offset": 0, 00:20:02.293 "data_size": 65536 00:20:02.293 } 00:20:02.293 ] 00:20:02.293 }' 00:20:02.293 09:19:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:20:02.293 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:02.293 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.293 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:02.293 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:20:02.293 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:02.293 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.293 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:02.293 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:02.293 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.293 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.293 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.293 09:19:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.293 09:19:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.293 09:19:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.293 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.293 "name": "raid_bdev1", 00:20:02.293 "uuid": "52561047-34e9-42da-bb03-2c5f8ffbf2b8", 00:20:02.293 "strip_size_kb": 0, 00:20:02.293 "state": "online", 00:20:02.293 "raid_level": "raid1", 00:20:02.293 "superblock": false, 00:20:02.293 "num_base_bdevs": 4, 00:20:02.293 "num_base_bdevs_discovered": 3, 00:20:02.293 "num_base_bdevs_operational": 3, 00:20:02.293 
"base_bdevs_list": [ 00:20:02.293 { 00:20:02.293 "name": "spare", 00:20:02.293 "uuid": "95a7d63c-0499-5746-9a53-daf4bc5090c1", 00:20:02.293 "is_configured": true, 00:20:02.293 "data_offset": 0, 00:20:02.293 "data_size": 65536 00:20:02.293 }, 00:20:02.293 { 00:20:02.293 "name": null, 00:20:02.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.293 "is_configured": false, 00:20:02.293 "data_offset": 0, 00:20:02.293 "data_size": 65536 00:20:02.293 }, 00:20:02.293 { 00:20:02.293 "name": "BaseBdev3", 00:20:02.293 "uuid": "dcc1ead0-ac6d-5980-8727-beca15306d15", 00:20:02.293 "is_configured": true, 00:20:02.293 "data_offset": 0, 00:20:02.293 "data_size": 65536 00:20:02.293 }, 00:20:02.293 { 00:20:02.293 "name": "BaseBdev4", 00:20:02.293 "uuid": "b2b69300-58d2-5ad8-b337-4aaa4ee0b6e5", 00:20:02.293 "is_configured": true, 00:20:02.293 "data_offset": 0, 00:20:02.293 "data_size": 65536 00:20:02.293 } 00:20:02.293 ] 00:20:02.293 }' 00:20:02.293 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.293 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:02.293 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.552 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:02.552 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:02.552 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.552 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.552 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.552 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.552 09:19:46 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:02.552 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.552 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.552 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.552 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.552 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.552 09:19:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.552 09:19:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.552 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.552 09:19:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.552 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.552 "name": "raid_bdev1", 00:20:02.552 "uuid": "52561047-34e9-42da-bb03-2c5f8ffbf2b8", 00:20:02.552 "strip_size_kb": 0, 00:20:02.552 "state": "online", 00:20:02.552 "raid_level": "raid1", 00:20:02.552 "superblock": false, 00:20:02.552 "num_base_bdevs": 4, 00:20:02.552 "num_base_bdevs_discovered": 3, 00:20:02.552 "num_base_bdevs_operational": 3, 00:20:02.552 "base_bdevs_list": [ 00:20:02.552 { 00:20:02.552 "name": "spare", 00:20:02.552 "uuid": "95a7d63c-0499-5746-9a53-daf4bc5090c1", 00:20:02.552 "is_configured": true, 00:20:02.552 "data_offset": 0, 00:20:02.552 "data_size": 65536 00:20:02.552 }, 00:20:02.552 { 00:20:02.552 "name": null, 00:20:02.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.552 "is_configured": false, 00:20:02.552 "data_offset": 0, 00:20:02.552 "data_size": 65536 00:20:02.552 }, 00:20:02.552 { 00:20:02.552 "name": "BaseBdev3", 00:20:02.552 "uuid": 
"dcc1ead0-ac6d-5980-8727-beca15306d15", 00:20:02.552 "is_configured": true, 00:20:02.552 "data_offset": 0, 00:20:02.552 "data_size": 65536 00:20:02.552 }, 00:20:02.552 { 00:20:02.552 "name": "BaseBdev4", 00:20:02.552 "uuid": "b2b69300-58d2-5ad8-b337-4aaa4ee0b6e5", 00:20:02.552 "is_configured": true, 00:20:02.552 "data_offset": 0, 00:20:02.552 "data_size": 65536 00:20:02.552 } 00:20:02.552 ] 00:20:02.552 }' 00:20:02.552 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.552 09:19:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.810 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:02.810 09:19:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.810 09:19:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.068 [2024-10-15 09:19:46.738270] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:03.068 [2024-10-15 09:19:46.738335] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:03.068 [2024-10-15 09:19:46.738465] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:03.068 [2024-10-15 09:19:46.738593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:03.068 [2024-10-15 09:19:46.738612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:03.068 09:19:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.068 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.068 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:20:03.068 09:19:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:03.068 09:19:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.068 09:19:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.068 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:03.068 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:03.068 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:03.068 09:19:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:03.068 09:19:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:03.068 09:19:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:03.068 09:19:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:03.068 09:19:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:03.068 09:19:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:03.068 09:19:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:03.068 09:19:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:03.068 09:19:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:03.068 09:19:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:03.326 /dev/nbd0 00:20:03.326 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:03.326 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:03.326 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:03.326 09:19:47 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:20:03.326 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:03.326 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:03.326 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:03.326 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:20:03.326 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:03.326 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:03.326 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:03.326 1+0 records in 00:20:03.326 1+0 records out 00:20:03.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345535 s, 11.9 MB/s 00:20:03.326 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.326 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:20:03.326 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.326 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:03.326 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:20:03.326 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:03.326 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:03.326 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:03.584 /dev/nbd1 00:20:03.585 
09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:03.585 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:03.585 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:20:03.585 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:20:03.585 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:03.585 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:03.585 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:20:03.585 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:20:03.585 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:03.585 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:03.585 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:03.585 1+0 records in 00:20:03.585 1+0 records out 00:20:03.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584638 s, 7.0 MB/s 00:20:03.585 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.585 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:20:03.585 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.585 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:03.585 09:19:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:20:03.585 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:20:03.585 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:03.585 09:19:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:03.843 09:19:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:03.843 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:03.843 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:03.843 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:03.843 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:03.843 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:03.843 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:04.156 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:04.156 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:04.156 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:04.156 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:04.156 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:04.156 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:04.156 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:04.156 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:04.156 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:04.156 09:19:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:04.415 09:19:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:04.415 09:19:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:04.415 09:19:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:04.415 09:19:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:04.415 09:19:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:04.415 09:19:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:04.415 09:19:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:04.415 09:19:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:04.415 09:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:04.415 09:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 78076 00:20:04.415 09:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 78076 ']' 00:20:04.415 09:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 78076 00:20:04.415 09:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:20:04.415 09:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:04.415 09:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78076 00:20:04.415 killing process with pid 78076 00:20:04.415 Received shutdown signal, test time was about 60.000000 seconds 00:20:04.415 00:20:04.415 Latency(us) 00:20:04.415 [2024-10-15T09:19:48.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.415 [2024-10-15T09:19:48.343Z] 
=================================================================================================================== 00:20:04.415 [2024-10-15T09:19:48.343Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:04.415 09:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:04.415 09:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:04.415 09:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78076' 00:20:04.415 09:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 78076 00:20:04.415 [2024-10-15 09:19:48.252701] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:04.415 09:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 78076 00:20:04.981 [2024-10-15 09:19:48.730855] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:06.355 ************************************ 00:20:06.355 END TEST raid_rebuild_test 00:20:06.355 ************************************ 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:20:06.355 00:20:06.355 real 0m21.479s 00:20:06.355 user 0m23.927s 00:20:06.355 sys 0m3.667s 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.355 09:19:49 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:20:06.355 09:19:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:20:06.355 09:19:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:06.355 09:19:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:06.355 ************************************ 00:20:06.355 START TEST raid_rebuild_test_sb 00:20:06.355 
************************************ 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78558 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78558 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 78558 ']' 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:06.355 09:19:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.355 [2024-10-15 09:19:50.036713] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:20:06.355 [2024-10-15 09:19:50.037442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78558 ] 00:20:06.355 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:06.355 Zero copy mechanism will not be used. 
00:20:06.355 [2024-10-15 09:19:50.221589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.614 [2024-10-15 09:19:50.390396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.885 [2024-10-15 09:19:50.613666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:06.885 [2024-10-15 09:19:50.613744] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:07.156 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:07.156 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:20:07.156 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:07.156 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:07.156 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.156 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.156 BaseBdev1_malloc 00:20:07.156 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.156 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:07.156 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.156 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.156 [2024-10-15 09:19:51.068009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:07.156 [2024-10-15 09:19:51.068131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.156 [2024-10-15 09:19:51.068173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:07.156 [2024-10-15 
09:19:51.068228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.156 [2024-10-15 09:19:51.071383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.156 [2024-10-15 09:19:51.071585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:07.156 BaseBdev1 00:20:07.156 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.156 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:07.156 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:07.156 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.156 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.414 BaseBdev2_malloc 00:20:07.414 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.414 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:07.414 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.414 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.414 [2024-10-15 09:19:51.127909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:07.414 [2024-10-15 09:19:51.128009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.415 [2024-10-15 09:19:51.128044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:07.415 [2024-10-15 09:19:51.128064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.415 [2024-10-15 09:19:51.131198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:20:07.415 [2024-10-15 09:19:51.131252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:07.415 BaseBdev2 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.415 BaseBdev3_malloc 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.415 [2024-10-15 09:19:51.195006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:07.415 [2024-10-15 09:19:51.195281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.415 [2024-10-15 09:19:51.195333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:07.415 [2024-10-15 09:19:51.195355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.415 [2024-10-15 09:19:51.198416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.415 [2024-10-15 09:19:51.198597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:07.415 BaseBdev3 00:20:07.415 09:19:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.415 BaseBdev4_malloc 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.415 [2024-10-15 09:19:51.255110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:07.415 [2024-10-15 09:19:51.255231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.415 [2024-10-15 09:19:51.255271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:07.415 [2024-10-15 09:19:51.255291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.415 [2024-10-15 09:19:51.258425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.415 [2024-10-15 09:19:51.258483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:07.415 BaseBdev4 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.415 spare_malloc 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.415 spare_delay 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.415 [2024-10-15 09:19:51.323539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:07.415 [2024-10-15 09:19:51.323636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.415 [2024-10-15 09:19:51.323671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:07.415 [2024-10-15 09:19:51.323691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.415 [2024-10-15 09:19:51.326733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.415 [2024-10-15 09:19:51.326786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:07.415 spare 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.415 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.415 [2024-10-15 09:19:51.335731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:07.415 [2024-10-15 09:19:51.338372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:07.415 [2024-10-15 09:19:51.338473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:07.415 [2024-10-15 09:19:51.338558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:07.415 [2024-10-15 09:19:51.338834] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:07.415 [2024-10-15 09:19:51.338860] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:07.415 [2024-10-15 09:19:51.339244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:07.415 [2024-10-15 09:19:51.339497] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:07.415 [2024-10-15 09:19:51.339519] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:07.415 [2024-10-15 09:19:51.339793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:07.673 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.673 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:07.673 09:19:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:07.673 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:07.673 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:07.673 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:07.673 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:07.673 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.673 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.673 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.673 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.673 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.673 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.673 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.673 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.673 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.673 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.673 "name": "raid_bdev1", 00:20:07.673 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:07.673 "strip_size_kb": 0, 00:20:07.673 "state": "online", 00:20:07.673 "raid_level": "raid1", 00:20:07.673 "superblock": true, 00:20:07.673 "num_base_bdevs": 4, 00:20:07.673 "num_base_bdevs_discovered": 4, 00:20:07.673 "num_base_bdevs_operational": 4, 00:20:07.673 "base_bdevs_list": [ 00:20:07.673 { 
00:20:07.673 "name": "BaseBdev1", 00:20:07.673 "uuid": "b760c0b9-72ad-520c-9027-8708ab8f69c4", 00:20:07.673 "is_configured": true, 00:20:07.673 "data_offset": 2048, 00:20:07.673 "data_size": 63488 00:20:07.673 }, 00:20:07.673 { 00:20:07.673 "name": "BaseBdev2", 00:20:07.673 "uuid": "c70f7a7e-9da0-5bd1-b24e-07de3d4cee16", 00:20:07.673 "is_configured": true, 00:20:07.673 "data_offset": 2048, 00:20:07.673 "data_size": 63488 00:20:07.673 }, 00:20:07.673 { 00:20:07.673 "name": "BaseBdev3", 00:20:07.673 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:07.673 "is_configured": true, 00:20:07.673 "data_offset": 2048, 00:20:07.673 "data_size": 63488 00:20:07.673 }, 00:20:07.673 { 00:20:07.673 "name": "BaseBdev4", 00:20:07.673 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:07.673 "is_configured": true, 00:20:07.673 "data_offset": 2048, 00:20:07.673 "data_size": 63488 00:20:07.673 } 00:20:07.673 ] 00:20:07.673 }' 00:20:07.673 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.673 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.242 [2024-10-15 09:19:51.868385] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r 
'.[].base_bdevs_list[0].data_offset' 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:08.242 09:19:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 
/dev/nbd0 00:20:08.501 [2024-10-15 09:19:52.196097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:08.501 /dev/nbd0 00:20:08.501 09:19:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:08.501 09:19:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:08.501 09:19:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:08.501 09:19:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:20:08.501 09:19:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:08.501 09:19:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:08.501 09:19:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:08.501 09:19:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:20:08.501 09:19:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:08.501 09:19:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:08.501 09:19:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:08.501 1+0 records in 00:20:08.501 1+0 records out 00:20:08.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689118 s, 5.9 MB/s 00:20:08.501 09:19:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:08.501 09:19:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:20:08.501 09:19:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:08.501 09:19:52 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:08.501 09:19:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:20:08.501 09:19:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:08.501 09:19:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:08.501 09:19:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:08.501 09:19:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:08.501 09:19:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:18.479 63488+0 records in 00:20:18.479 63488+0 records out 00:20:18.479 32505856 bytes (33 MB, 31 MiB) copied, 8.28884 s, 3.9 MB/s 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:18.479 [2024-10-15 09:20:00.803813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:18.479 09:20:00 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.479 [2024-10-15 09:20:00.831975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.479 "name": "raid_bdev1", 00:20:18.479 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:18.479 "strip_size_kb": 0, 00:20:18.479 "state": "online", 00:20:18.479 "raid_level": "raid1", 00:20:18.479 "superblock": true, 00:20:18.479 "num_base_bdevs": 4, 00:20:18.479 "num_base_bdevs_discovered": 3, 00:20:18.479 "num_base_bdevs_operational": 3, 00:20:18.479 "base_bdevs_list": [ 00:20:18.479 { 00:20:18.479 "name": null, 00:20:18.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.479 "is_configured": false, 00:20:18.479 "data_offset": 0, 00:20:18.479 "data_size": 63488 00:20:18.479 }, 00:20:18.479 { 00:20:18.479 "name": "BaseBdev2", 00:20:18.479 "uuid": "c70f7a7e-9da0-5bd1-b24e-07de3d4cee16", 00:20:18.479 "is_configured": true, 00:20:18.479 "data_offset": 2048, 00:20:18.479 "data_size": 63488 00:20:18.479 }, 00:20:18.479 { 00:20:18.479 "name": "BaseBdev3", 00:20:18.479 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:18.479 "is_configured": true, 00:20:18.479 "data_offset": 2048, 00:20:18.479 "data_size": 63488 00:20:18.479 }, 00:20:18.479 { 00:20:18.479 "name": "BaseBdev4", 00:20:18.479 
"uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:18.479 "is_configured": true, 00:20:18.479 "data_offset": 2048, 00:20:18.479 "data_size": 63488 00:20:18.479 } 00:20:18.479 ] 00:20:18.479 }' 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.479 09:20:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.479 09:20:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:18.479 09:20:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.479 09:20:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.479 [2024-10-15 09:20:01.328113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:18.479 [2024-10-15 09:20:01.343571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:20:18.479 09:20:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.479 09:20:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:18.479 [2024-10-15 09:20:01.346405] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:18.479 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.479 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:18.479 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:18.479 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:18.479 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.479 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.479 
09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.479 09:20:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.479 09:20:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.479 09:20:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.479 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.479 "name": "raid_bdev1", 00:20:18.479 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:18.479 "strip_size_kb": 0, 00:20:18.479 "state": "online", 00:20:18.479 "raid_level": "raid1", 00:20:18.479 "superblock": true, 00:20:18.479 "num_base_bdevs": 4, 00:20:18.479 "num_base_bdevs_discovered": 4, 00:20:18.479 "num_base_bdevs_operational": 4, 00:20:18.479 "process": { 00:20:18.479 "type": "rebuild", 00:20:18.479 "target": "spare", 00:20:18.479 "progress": { 00:20:18.479 "blocks": 20480, 00:20:18.479 "percent": 32 00:20:18.479 } 00:20:18.479 }, 00:20:18.479 "base_bdevs_list": [ 00:20:18.479 { 00:20:18.479 "name": "spare", 00:20:18.479 "uuid": "43555928-4b30-56b3-9624-52e1d016cc7e", 00:20:18.479 "is_configured": true, 00:20:18.479 "data_offset": 2048, 00:20:18.479 "data_size": 63488 00:20:18.479 }, 00:20:18.479 { 00:20:18.479 "name": "BaseBdev2", 00:20:18.479 "uuid": "c70f7a7e-9da0-5bd1-b24e-07de3d4cee16", 00:20:18.479 "is_configured": true, 00:20:18.479 "data_offset": 2048, 00:20:18.479 "data_size": 63488 00:20:18.479 }, 00:20:18.479 { 00:20:18.479 "name": "BaseBdev3", 00:20:18.479 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:18.479 "is_configured": true, 00:20:18.479 "data_offset": 2048, 00:20:18.479 "data_size": 63488 00:20:18.479 }, 00:20:18.479 { 00:20:18.479 "name": "BaseBdev4", 00:20:18.479 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:18.479 "is_configured": true, 00:20:18.479 "data_offset": 2048, 00:20:18.479 
"data_size": 63488 00:20:18.479 } 00:20:18.479 ] 00:20:18.479 }' 00:20:18.479 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.738 [2024-10-15 09:20:02.504867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:18.738 [2024-10-15 09:20:02.558662] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:18.738 [2024-10-15 09:20:02.559055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.738 [2024-10-15 09:20:02.559091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:18.738 [2024-10-15 09:20:02.559109] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.738 "name": "raid_bdev1", 00:20:18.738 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:18.738 "strip_size_kb": 0, 00:20:18.738 "state": "online", 00:20:18.738 "raid_level": "raid1", 00:20:18.738 "superblock": true, 00:20:18.738 "num_base_bdevs": 4, 00:20:18.738 "num_base_bdevs_discovered": 3, 00:20:18.738 "num_base_bdevs_operational": 3, 00:20:18.738 "base_bdevs_list": [ 00:20:18.738 { 00:20:18.738 "name": null, 00:20:18.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.738 "is_configured": false, 00:20:18.738 "data_offset": 0, 00:20:18.738 "data_size": 63488 00:20:18.738 }, 00:20:18.738 { 00:20:18.738 "name": "BaseBdev2", 
00:20:18.738 "uuid": "c70f7a7e-9da0-5bd1-b24e-07de3d4cee16", 00:20:18.738 "is_configured": true, 00:20:18.738 "data_offset": 2048, 00:20:18.738 "data_size": 63488 00:20:18.738 }, 00:20:18.738 { 00:20:18.738 "name": "BaseBdev3", 00:20:18.738 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:18.738 "is_configured": true, 00:20:18.738 "data_offset": 2048, 00:20:18.738 "data_size": 63488 00:20:18.738 }, 00:20:18.738 { 00:20:18.738 "name": "BaseBdev4", 00:20:18.738 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:18.738 "is_configured": true, 00:20:18.738 "data_offset": 2048, 00:20:18.738 "data_size": 63488 00:20:18.738 } 00:20:18.738 ] 00:20:18.738 }' 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.738 09:20:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.306 09:20:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:19.306 09:20:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.306 09:20:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:19.306 09:20:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:19.306 09:20:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.306 09:20:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.306 09:20:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.306 09:20:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.306 09:20:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.306 09:20:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.306 09:20:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.306 "name": "raid_bdev1", 00:20:19.306 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:19.306 "strip_size_kb": 0, 00:20:19.306 "state": "online", 00:20:19.306 "raid_level": "raid1", 00:20:19.306 "superblock": true, 00:20:19.306 "num_base_bdevs": 4, 00:20:19.306 "num_base_bdevs_discovered": 3, 00:20:19.306 "num_base_bdevs_operational": 3, 00:20:19.306 "base_bdevs_list": [ 00:20:19.306 { 00:20:19.306 "name": null, 00:20:19.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.306 "is_configured": false, 00:20:19.306 "data_offset": 0, 00:20:19.306 "data_size": 63488 00:20:19.306 }, 00:20:19.306 { 00:20:19.306 "name": "BaseBdev2", 00:20:19.306 "uuid": "c70f7a7e-9da0-5bd1-b24e-07de3d4cee16", 00:20:19.306 "is_configured": true, 00:20:19.306 "data_offset": 2048, 00:20:19.306 "data_size": 63488 00:20:19.306 }, 00:20:19.306 { 00:20:19.306 "name": "BaseBdev3", 00:20:19.306 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:19.306 "is_configured": true, 00:20:19.306 "data_offset": 2048, 00:20:19.306 "data_size": 63488 00:20:19.306 }, 00:20:19.306 { 00:20:19.306 "name": "BaseBdev4", 00:20:19.306 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:19.306 "is_configured": true, 00:20:19.306 "data_offset": 2048, 00:20:19.306 "data_size": 63488 00:20:19.306 } 00:20:19.306 ] 00:20:19.306 }' 00:20:19.306 09:20:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.306 09:20:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:19.306 09:20:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.565 09:20:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:19.565 09:20:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:19.565 09:20:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.565 09:20:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.565 [2024-10-15 09:20:03.252234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:19.565 [2024-10-15 09:20:03.266701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:20:19.565 09:20:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.565 09:20:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:19.565 [2024-10-15 09:20:03.269632] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:20.500 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.500 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.500 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:20.500 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:20.500 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.500 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.500 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.500 09:20:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.500 09:20:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.500 09:20:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.500 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.500 "name": 
"raid_bdev1", 00:20:20.500 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:20.500 "strip_size_kb": 0, 00:20:20.500 "state": "online", 00:20:20.500 "raid_level": "raid1", 00:20:20.500 "superblock": true, 00:20:20.500 "num_base_bdevs": 4, 00:20:20.500 "num_base_bdevs_discovered": 4, 00:20:20.500 "num_base_bdevs_operational": 4, 00:20:20.500 "process": { 00:20:20.500 "type": "rebuild", 00:20:20.500 "target": "spare", 00:20:20.500 "progress": { 00:20:20.500 "blocks": 18432, 00:20:20.500 "percent": 29 00:20:20.500 } 00:20:20.500 }, 00:20:20.500 "base_bdevs_list": [ 00:20:20.500 { 00:20:20.500 "name": "spare", 00:20:20.500 "uuid": "43555928-4b30-56b3-9624-52e1d016cc7e", 00:20:20.500 "is_configured": true, 00:20:20.500 "data_offset": 2048, 00:20:20.500 "data_size": 63488 00:20:20.500 }, 00:20:20.500 { 00:20:20.500 "name": "BaseBdev2", 00:20:20.500 "uuid": "c70f7a7e-9da0-5bd1-b24e-07de3d4cee16", 00:20:20.500 "is_configured": true, 00:20:20.500 "data_offset": 2048, 00:20:20.500 "data_size": 63488 00:20:20.500 }, 00:20:20.500 { 00:20:20.500 "name": "BaseBdev3", 00:20:20.500 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:20.500 "is_configured": true, 00:20:20.500 "data_offset": 2048, 00:20:20.500 "data_size": 63488 00:20:20.500 }, 00:20:20.500 { 00:20:20.500 "name": "BaseBdev4", 00:20:20.500 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:20.500 "is_configured": true, 00:20:20.500 "data_offset": 2048, 00:20:20.500 "data_size": 63488 00:20:20.500 } 00:20:20.500 ] 00:20:20.500 }' 00:20:20.500 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.500 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:20.500 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:20.759 09:20:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:20.759 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.759 [2024-10-15 09:20:04.440070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:20.759 [2024-10-15 09:20:04.582399] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:20.759 09:20:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.759 "name": "raid_bdev1", 00:20:20.759 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:20.759 "strip_size_kb": 0, 00:20:20.759 "state": "online", 00:20:20.759 "raid_level": "raid1", 00:20:20.759 "superblock": true, 00:20:20.759 "num_base_bdevs": 4, 00:20:20.759 "num_base_bdevs_discovered": 3, 00:20:20.759 "num_base_bdevs_operational": 3, 00:20:20.759 "process": { 00:20:20.759 "type": "rebuild", 00:20:20.759 "target": "spare", 00:20:20.759 "progress": { 00:20:20.759 "blocks": 24576, 00:20:20.759 "percent": 38 00:20:20.759 } 00:20:20.759 }, 00:20:20.759 "base_bdevs_list": [ 00:20:20.759 { 00:20:20.759 "name": "spare", 00:20:20.759 "uuid": "43555928-4b30-56b3-9624-52e1d016cc7e", 00:20:20.759 "is_configured": true, 00:20:20.759 "data_offset": 2048, 00:20:20.759 "data_size": 63488 00:20:20.759 }, 00:20:20.759 { 00:20:20.759 "name": null, 00:20:20.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.759 "is_configured": false, 00:20:20.759 "data_offset": 0, 00:20:20.759 "data_size": 63488 00:20:20.759 }, 00:20:20.759 { 00:20:20.759 "name": "BaseBdev3", 00:20:20.759 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:20.759 "is_configured": true, 00:20:20.759 "data_offset": 2048, 00:20:20.759 "data_size": 63488 00:20:20.759 }, 
00:20:20.759 { 00:20:20.759 "name": "BaseBdev4", 00:20:20.759 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:20.759 "is_configured": true, 00:20:20.759 "data_offset": 2048, 00:20:20.759 "data_size": 63488 00:20:20.759 } 00:20:20.759 ] 00:20:20.759 }' 00:20:20.759 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.018 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:21.018 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.018 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:21.018 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=515 00:20:21.018 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:21.018 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:21.018 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.018 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:21.018 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:21.018 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.018 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.018 09:20:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.018 09:20:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.018 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.018 09:20:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.018 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.018 "name": "raid_bdev1", 00:20:21.018 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:21.018 "strip_size_kb": 0, 00:20:21.018 "state": "online", 00:20:21.018 "raid_level": "raid1", 00:20:21.018 "superblock": true, 00:20:21.018 "num_base_bdevs": 4, 00:20:21.018 "num_base_bdevs_discovered": 3, 00:20:21.018 "num_base_bdevs_operational": 3, 00:20:21.018 "process": { 00:20:21.018 "type": "rebuild", 00:20:21.018 "target": "spare", 00:20:21.018 "progress": { 00:20:21.018 "blocks": 26624, 00:20:21.018 "percent": 41 00:20:21.018 } 00:20:21.018 }, 00:20:21.018 "base_bdevs_list": [ 00:20:21.018 { 00:20:21.018 "name": "spare", 00:20:21.018 "uuid": "43555928-4b30-56b3-9624-52e1d016cc7e", 00:20:21.018 "is_configured": true, 00:20:21.018 "data_offset": 2048, 00:20:21.018 "data_size": 63488 00:20:21.018 }, 00:20:21.018 { 00:20:21.018 "name": null, 00:20:21.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.018 "is_configured": false, 00:20:21.018 "data_offset": 0, 00:20:21.018 "data_size": 63488 00:20:21.018 }, 00:20:21.018 { 00:20:21.018 "name": "BaseBdev3", 00:20:21.018 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:21.018 "is_configured": true, 00:20:21.018 "data_offset": 2048, 00:20:21.018 "data_size": 63488 00:20:21.018 }, 00:20:21.018 { 00:20:21.018 "name": "BaseBdev4", 00:20:21.018 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:21.018 "is_configured": true, 00:20:21.018 "data_offset": 2048, 00:20:21.018 "data_size": 63488 00:20:21.018 } 00:20:21.018 ] 00:20:21.018 }' 00:20:21.018 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.018 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:21.018 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:20:21.018 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:21.018 09:20:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:22.414 09:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:22.414 09:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:22.414 09:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:22.414 09:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:22.414 09:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:22.414 09:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:22.414 09:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.414 09:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.414 09:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.414 09:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.414 09:20:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.414 09:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:22.414 "name": "raid_bdev1", 00:20:22.414 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:22.414 "strip_size_kb": 0, 00:20:22.414 "state": "online", 00:20:22.414 "raid_level": "raid1", 00:20:22.414 "superblock": true, 00:20:22.414 "num_base_bdevs": 4, 00:20:22.414 "num_base_bdevs_discovered": 3, 00:20:22.414 "num_base_bdevs_operational": 3, 00:20:22.414 "process": { 00:20:22.414 "type": "rebuild", 00:20:22.414 "target": "spare", 
00:20:22.414 "progress": { 00:20:22.414 "blocks": 51200, 00:20:22.414 "percent": 80 00:20:22.414 } 00:20:22.414 }, 00:20:22.414 "base_bdevs_list": [ 00:20:22.414 { 00:20:22.414 "name": "spare", 00:20:22.414 "uuid": "43555928-4b30-56b3-9624-52e1d016cc7e", 00:20:22.414 "is_configured": true, 00:20:22.414 "data_offset": 2048, 00:20:22.414 "data_size": 63488 00:20:22.414 }, 00:20:22.414 { 00:20:22.414 "name": null, 00:20:22.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.414 "is_configured": false, 00:20:22.414 "data_offset": 0, 00:20:22.414 "data_size": 63488 00:20:22.414 }, 00:20:22.414 { 00:20:22.414 "name": "BaseBdev3", 00:20:22.414 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:22.414 "is_configured": true, 00:20:22.414 "data_offset": 2048, 00:20:22.414 "data_size": 63488 00:20:22.414 }, 00:20:22.414 { 00:20:22.414 "name": "BaseBdev4", 00:20:22.414 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:22.414 "is_configured": true, 00:20:22.414 "data_offset": 2048, 00:20:22.414 "data_size": 63488 00:20:22.414 } 00:20:22.414 ] 00:20:22.414 }' 00:20:22.414 09:20:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:22.414 09:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:22.414 09:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:22.414 09:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:22.414 09:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:22.673 [2024-10-15 09:20:06.501678] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:22.673 [2024-10-15 09:20:06.501799] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:22.673 [2024-10-15 09:20:06.502017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:20:23.240 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:23.240 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:23.240 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.240 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:23.240 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:23.240 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.240 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.240 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.240 09:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.240 09:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.240 09:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.240 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.240 "name": "raid_bdev1", 00:20:23.240 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:23.240 "strip_size_kb": 0, 00:20:23.240 "state": "online", 00:20:23.240 "raid_level": "raid1", 00:20:23.240 "superblock": true, 00:20:23.240 "num_base_bdevs": 4, 00:20:23.240 "num_base_bdevs_discovered": 3, 00:20:23.240 "num_base_bdevs_operational": 3, 00:20:23.240 "base_bdevs_list": [ 00:20:23.240 { 00:20:23.240 "name": "spare", 00:20:23.240 "uuid": "43555928-4b30-56b3-9624-52e1d016cc7e", 00:20:23.240 "is_configured": true, 00:20:23.240 "data_offset": 2048, 00:20:23.240 "data_size": 63488 00:20:23.240 }, 00:20:23.240 { 00:20:23.240 "name": null, 
00:20:23.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.240 "is_configured": false, 00:20:23.240 "data_offset": 0, 00:20:23.240 "data_size": 63488 00:20:23.240 }, 00:20:23.240 { 00:20:23.240 "name": "BaseBdev3", 00:20:23.240 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:23.240 "is_configured": true, 00:20:23.240 "data_offset": 2048, 00:20:23.240 "data_size": 63488 00:20:23.240 }, 00:20:23.240 { 00:20:23.240 "name": "BaseBdev4", 00:20:23.240 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:23.240 "is_configured": true, 00:20:23.240 "data_offset": 2048, 00:20:23.240 "data_size": 63488 00:20:23.240 } 00:20:23.240 ] 00:20:23.240 }' 00:20:23.240 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.500 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:23.500 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.500 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:23.500 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:20:23.500 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:23.500 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.500 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:23.500 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:23.500 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.500 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.500 09:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.500 
09:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.500 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.500 09:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.500 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.500 "name": "raid_bdev1", 00:20:23.500 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:23.500 "strip_size_kb": 0, 00:20:23.500 "state": "online", 00:20:23.500 "raid_level": "raid1", 00:20:23.500 "superblock": true, 00:20:23.500 "num_base_bdevs": 4, 00:20:23.500 "num_base_bdevs_discovered": 3, 00:20:23.500 "num_base_bdevs_operational": 3, 00:20:23.500 "base_bdevs_list": [ 00:20:23.500 { 00:20:23.500 "name": "spare", 00:20:23.500 "uuid": "43555928-4b30-56b3-9624-52e1d016cc7e", 00:20:23.500 "is_configured": true, 00:20:23.500 "data_offset": 2048, 00:20:23.500 "data_size": 63488 00:20:23.500 }, 00:20:23.500 { 00:20:23.500 "name": null, 00:20:23.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.500 "is_configured": false, 00:20:23.500 "data_offset": 0, 00:20:23.500 "data_size": 63488 00:20:23.500 }, 00:20:23.500 { 00:20:23.500 "name": "BaseBdev3", 00:20:23.500 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:23.500 "is_configured": true, 00:20:23.500 "data_offset": 2048, 00:20:23.500 "data_size": 63488 00:20:23.500 }, 00:20:23.500 { 00:20:23.500 "name": "BaseBdev4", 00:20:23.500 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:23.500 "is_configured": true, 00:20:23.500 "data_offset": 2048, 00:20:23.500 "data_size": 63488 00:20:23.500 } 00:20:23.500 ] 00:20:23.500 }' 00:20:23.501 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.501 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:23.501 09:20:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.501 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:23.501 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:23.501 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:23.501 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:23.501 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:23.501 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:23.501 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:23.501 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.501 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.501 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.501 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.501 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.501 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.501 09:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.501 09:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.501 09:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.760 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.760 "name": "raid_bdev1", 
00:20:23.760 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:23.760 "strip_size_kb": 0, 00:20:23.760 "state": "online", 00:20:23.760 "raid_level": "raid1", 00:20:23.760 "superblock": true, 00:20:23.760 "num_base_bdevs": 4, 00:20:23.760 "num_base_bdevs_discovered": 3, 00:20:23.760 "num_base_bdevs_operational": 3, 00:20:23.760 "base_bdevs_list": [ 00:20:23.760 { 00:20:23.760 "name": "spare", 00:20:23.760 "uuid": "43555928-4b30-56b3-9624-52e1d016cc7e", 00:20:23.760 "is_configured": true, 00:20:23.760 "data_offset": 2048, 00:20:23.760 "data_size": 63488 00:20:23.760 }, 00:20:23.760 { 00:20:23.760 "name": null, 00:20:23.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.760 "is_configured": false, 00:20:23.760 "data_offset": 0, 00:20:23.760 "data_size": 63488 00:20:23.760 }, 00:20:23.760 { 00:20:23.760 "name": "BaseBdev3", 00:20:23.760 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:23.760 "is_configured": true, 00:20:23.760 "data_offset": 2048, 00:20:23.760 "data_size": 63488 00:20:23.760 }, 00:20:23.760 { 00:20:23.760 "name": "BaseBdev4", 00:20:23.760 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:23.760 "is_configured": true, 00:20:23.760 "data_offset": 2048, 00:20:23.760 "data_size": 63488 00:20:23.760 } 00:20:23.760 ] 00:20:23.760 }' 00:20:23.760 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.760 09:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.019 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:24.019 09:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.019 09:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.019 [2024-10-15 09:20:07.936460] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:24.019 [2024-10-15 09:20:07.936566] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:20:24.019 [2024-10-15 09:20:07.936687] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:24.019 [2024-10-15 09:20:07.936795] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:24.019 [2024-10-15 09:20:07.936812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:24.019 09:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.019 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.019 09:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.019 09:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.019 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:20:24.282 09:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.282 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:24.282 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:24.282 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:24.282 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:24.282 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:24.282 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:24.282 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:24.282 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:20:24.282 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:24.282 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:24.282 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:24.282 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:24.282 09:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:24.542 /dev/nbd0 00:20:24.542 09:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:24.542 09:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:24.542 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:24.542 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:20:24.542 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:24.542 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:24.542 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:24.542 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:20:24.542 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:24.542 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:24.542 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:24.542 1+0 records in 00:20:24.542 1+0 records out 00:20:24.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039627 s, 10.3 MB/s 00:20:24.542 09:20:08 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:24.542 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:20:24.542 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:24.542 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:24.542 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:20:24.542 09:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:24.543 09:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:24.543 09:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:24.802 /dev/nbd1 00:20:24.802 09:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:24.802 09:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:24.802 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:20:24.802 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:20:24.802 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:24.802 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:24.802 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:20:24.802 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:20:24.802 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:24.802 09:20:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:24.802 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:24.802 1+0 records in 00:20:24.802 1+0 records out 00:20:24.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510834 s, 8.0 MB/s 00:20:24.802 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:24.802 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:20:24.802 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:24.802 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:24.802 09:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:20:24.802 09:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:24.802 09:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:24.802 09:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:25.061 09:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:25.061 09:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:25.061 09:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:25.061 09:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:25.061 09:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:25.061 09:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:25.061 09:20:08 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:25.320 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:25.320 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:25.320 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:25.320 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:25.320 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:25.320 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:25.320 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:25.320 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:25.320 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:25.320 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@45 -- # return 0 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.889 [2024-10-15 09:20:09.543708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:25.889 [2024-10-15 09:20:09.543793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.889 [2024-10-15 09:20:09.543832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:20:25.889 [2024-10-15 09:20:09.543850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.889 [2024-10-15 09:20:09.547042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.889 [2024-10-15 09:20:09.547092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:25.889 [2024-10-15 09:20:09.547237] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:25.889 [2024-10-15 09:20:09.547309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:25.889 [2024-10-15 09:20:09.547498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:20:25.889 [2024-10-15 09:20:09.547663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:25.889 spare 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.889 [2024-10-15 09:20:09.647865] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:25.889 [2024-10-15 09:20:09.647933] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:25.889 [2024-10-15 09:20:09.648484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:20:25.889 [2024-10-15 09:20:09.648786] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:25.889 [2024-10-15 09:20:09.648818] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:25.889 [2024-10-15 09:20:09.649078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.889 "name": "raid_bdev1", 00:20:25.889 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:25.889 "strip_size_kb": 0, 00:20:25.889 "state": "online", 00:20:25.889 "raid_level": "raid1", 00:20:25.889 "superblock": true, 00:20:25.889 "num_base_bdevs": 4, 00:20:25.889 "num_base_bdevs_discovered": 3, 00:20:25.889 "num_base_bdevs_operational": 3, 00:20:25.889 "base_bdevs_list": [ 00:20:25.889 { 00:20:25.889 "name": "spare", 00:20:25.889 "uuid": "43555928-4b30-56b3-9624-52e1d016cc7e", 00:20:25.889 "is_configured": true, 00:20:25.889 "data_offset": 2048, 00:20:25.889 "data_size": 63488 00:20:25.889 }, 00:20:25.889 { 00:20:25.889 "name": null, 00:20:25.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.889 "is_configured": false, 00:20:25.889 "data_offset": 2048, 
00:20:25.889 "data_size": 63488 00:20:25.889 }, 00:20:25.889 { 00:20:25.889 "name": "BaseBdev3", 00:20:25.889 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:25.889 "is_configured": true, 00:20:25.889 "data_offset": 2048, 00:20:25.889 "data_size": 63488 00:20:25.889 }, 00:20:25.889 { 00:20:25.889 "name": "BaseBdev4", 00:20:25.889 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:25.889 "is_configured": true, 00:20:25.889 "data_offset": 2048, 00:20:25.889 "data_size": 63488 00:20:25.889 } 00:20:25.889 ] 00:20:25.889 }' 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.889 09:20:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.456 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:26.456 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:26.456 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:26.456 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:26.456 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:26.456 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.456 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.456 09:20:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.456 09:20:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.456 09:20:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.456 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:26.456 "name": "raid_bdev1", 00:20:26.456 "uuid": 
"90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:26.456 "strip_size_kb": 0, 00:20:26.456 "state": "online", 00:20:26.456 "raid_level": "raid1", 00:20:26.456 "superblock": true, 00:20:26.456 "num_base_bdevs": 4, 00:20:26.456 "num_base_bdevs_discovered": 3, 00:20:26.456 "num_base_bdevs_operational": 3, 00:20:26.456 "base_bdevs_list": [ 00:20:26.456 { 00:20:26.456 "name": "spare", 00:20:26.456 "uuid": "43555928-4b30-56b3-9624-52e1d016cc7e", 00:20:26.457 "is_configured": true, 00:20:26.457 "data_offset": 2048, 00:20:26.457 "data_size": 63488 00:20:26.457 }, 00:20:26.457 { 00:20:26.457 "name": null, 00:20:26.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.457 "is_configured": false, 00:20:26.457 "data_offset": 2048, 00:20:26.457 "data_size": 63488 00:20:26.457 }, 00:20:26.457 { 00:20:26.457 "name": "BaseBdev3", 00:20:26.457 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:26.457 "is_configured": true, 00:20:26.457 "data_offset": 2048, 00:20:26.457 "data_size": 63488 00:20:26.457 }, 00:20:26.457 { 00:20:26.457 "name": "BaseBdev4", 00:20:26.457 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:26.457 "is_configured": true, 00:20:26.457 "data_offset": 2048, 00:20:26.457 "data_size": 63488 00:20:26.457 } 00:20:26.457 ] 00:20:26.457 }' 00:20:26.457 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:26.457 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:26.457 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:26.457 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:26.457 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.457 09:20:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.457 09:20:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:26.457 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:26.457 09:20:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.457 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:26.457 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:26.457 09:20:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.457 09:20:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.716 [2024-10-15 09:20:10.384180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:26.716 09:20:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.716 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:26.716 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:26.716 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:26.716 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:26.716 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:26.716 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:26.716 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.716 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.716 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.716 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:20:26.716 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.716 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.716 09:20:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.716 09:20:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.716 09:20:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.716 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.716 "name": "raid_bdev1", 00:20:26.716 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:26.716 "strip_size_kb": 0, 00:20:26.716 "state": "online", 00:20:26.716 "raid_level": "raid1", 00:20:26.716 "superblock": true, 00:20:26.716 "num_base_bdevs": 4, 00:20:26.716 "num_base_bdevs_discovered": 2, 00:20:26.716 "num_base_bdevs_operational": 2, 00:20:26.716 "base_bdevs_list": [ 00:20:26.716 { 00:20:26.716 "name": null, 00:20:26.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.716 "is_configured": false, 00:20:26.716 "data_offset": 0, 00:20:26.716 "data_size": 63488 00:20:26.716 }, 00:20:26.716 { 00:20:26.716 "name": null, 00:20:26.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.716 "is_configured": false, 00:20:26.716 "data_offset": 2048, 00:20:26.716 "data_size": 63488 00:20:26.716 }, 00:20:26.716 { 00:20:26.716 "name": "BaseBdev3", 00:20:26.716 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:26.716 "is_configured": true, 00:20:26.716 "data_offset": 2048, 00:20:26.716 "data_size": 63488 00:20:26.716 }, 00:20:26.716 { 00:20:26.716 "name": "BaseBdev4", 00:20:26.716 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:26.716 "is_configured": true, 00:20:26.716 "data_offset": 2048, 00:20:26.716 "data_size": 63488 00:20:26.716 } 00:20:26.716 ] 00:20:26.716 }' 00:20:26.716 09:20:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.716 09:20:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.283 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:27.283 09:20:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.283 09:20:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.283 [2024-10-15 09:20:10.916298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:27.283 [2024-10-15 09:20:10.916594] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:20:27.283 [2024-10-15 09:20:10.916637] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:27.283 [2024-10-15 09:20:10.916713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:27.283 [2024-10-15 09:20:10.930795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:20:27.283 09:20:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.283 09:20:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:27.283 [2024-10-15 09:20:10.933800] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:28.222 09:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:28.222 09:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:28.222 09:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:28.222 09:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:20:28.222 09:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:28.222 09:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.222 09:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.222 09:20:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.222 09:20:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.222 09:20:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.222 09:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:28.222 "name": "raid_bdev1", 00:20:28.222 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:28.222 "strip_size_kb": 0, 00:20:28.222 "state": "online", 00:20:28.222 "raid_level": "raid1", 00:20:28.222 "superblock": true, 00:20:28.222 "num_base_bdevs": 4, 00:20:28.222 "num_base_bdevs_discovered": 3, 00:20:28.222 "num_base_bdevs_operational": 3, 00:20:28.222 "process": { 00:20:28.222 "type": "rebuild", 00:20:28.222 "target": "spare", 00:20:28.222 "progress": { 00:20:28.222 "blocks": 20480, 00:20:28.222 "percent": 32 00:20:28.222 } 00:20:28.222 }, 00:20:28.222 "base_bdevs_list": [ 00:20:28.222 { 00:20:28.222 "name": "spare", 00:20:28.222 "uuid": "43555928-4b30-56b3-9624-52e1d016cc7e", 00:20:28.222 "is_configured": true, 00:20:28.222 "data_offset": 2048, 00:20:28.222 "data_size": 63488 00:20:28.222 }, 00:20:28.222 { 00:20:28.222 "name": null, 00:20:28.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.222 "is_configured": false, 00:20:28.222 "data_offset": 2048, 00:20:28.222 "data_size": 63488 00:20:28.222 }, 00:20:28.222 { 00:20:28.222 "name": "BaseBdev3", 00:20:28.222 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:28.222 "is_configured": true, 00:20:28.222 "data_offset": 2048, 00:20:28.222 "data_size": 
63488 00:20:28.222 }, 00:20:28.222 { 00:20:28.222 "name": "BaseBdev4", 00:20:28.222 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:28.222 "is_configured": true, 00:20:28.222 "data_offset": 2048, 00:20:28.222 "data_size": 63488 00:20:28.222 } 00:20:28.222 ] 00:20:28.222 }' 00:20:28.222 09:20:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:28.222 09:20:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:28.222 09:20:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:28.222 09:20:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:28.222 09:20:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:28.222 09:20:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.222 09:20:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.222 [2024-10-15 09:20:12.104290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:28.222 [2024-10-15 09:20:12.145833] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:28.222 [2024-10-15 09:20:12.145991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.222 [2024-10-15 09:20:12.146025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:28.222 [2024-10-15 09:20:12.146038] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:28.492 09:20:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.492 09:20:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:28.492 09:20:12 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:28.492 09:20:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:28.492 09:20:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:28.492 09:20:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:28.492 09:20:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:28.492 09:20:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.492 09:20:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.492 09:20:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.492 09:20:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.492 09:20:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.492 09:20:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.492 09:20:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.492 09:20:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.492 09:20:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.492 09:20:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.492 "name": "raid_bdev1", 00:20:28.492 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:28.492 "strip_size_kb": 0, 00:20:28.492 "state": "online", 00:20:28.492 "raid_level": "raid1", 00:20:28.492 "superblock": true, 00:20:28.492 "num_base_bdevs": 4, 00:20:28.492 "num_base_bdevs_discovered": 2, 00:20:28.492 "num_base_bdevs_operational": 2, 00:20:28.492 "base_bdevs_list": [ 00:20:28.492 { 00:20:28.492 "name": null, 
00:20:28.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.492 "is_configured": false, 00:20:28.492 "data_offset": 0, 00:20:28.492 "data_size": 63488 00:20:28.492 }, 00:20:28.492 { 00:20:28.492 "name": null, 00:20:28.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.492 "is_configured": false, 00:20:28.492 "data_offset": 2048, 00:20:28.492 "data_size": 63488 00:20:28.492 }, 00:20:28.492 { 00:20:28.492 "name": "BaseBdev3", 00:20:28.492 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:28.492 "is_configured": true, 00:20:28.492 "data_offset": 2048, 00:20:28.492 "data_size": 63488 00:20:28.492 }, 00:20:28.492 { 00:20:28.492 "name": "BaseBdev4", 00:20:28.492 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:28.492 "is_configured": true, 00:20:28.492 "data_offset": 2048, 00:20:28.492 "data_size": 63488 00:20:28.492 } 00:20:28.492 ] 00:20:28.492 }' 00:20:28.492 09:20:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.492 09:20:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.060 09:20:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:29.060 09:20:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.060 09:20:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.060 [2024-10-15 09:20:12.705028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:29.060 [2024-10-15 09:20:12.705278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:29.060 [2024-10-15 09:20:12.705478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:20:29.060 [2024-10-15 09:20:12.705508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:29.060 [2024-10-15 09:20:12.706234] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:20:29.060 [2024-10-15 09:20:12.706271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:29.060 [2024-10-15 09:20:12.706441] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:29.060 [2024-10-15 09:20:12.706464] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:20:29.061 [2024-10-15 09:20:12.706485] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:29.061 [2024-10-15 09:20:12.706529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:29.061 [2024-10-15 09:20:12.720484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:20:29.061 spare 00:20:29.061 09:20:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.061 09:20:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:29.061 [2024-10-15 09:20:12.723442] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:29.995 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:29.995 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:29.995 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:29.995 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:29.995 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:29.995 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.995 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:29.995 09:20:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.995 09:20:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.995 09:20:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.995 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:29.995 "name": "raid_bdev1", 00:20:29.995 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:29.995 "strip_size_kb": 0, 00:20:29.995 "state": "online", 00:20:29.995 "raid_level": "raid1", 00:20:29.995 "superblock": true, 00:20:29.995 "num_base_bdevs": 4, 00:20:29.995 "num_base_bdevs_discovered": 3, 00:20:29.995 "num_base_bdevs_operational": 3, 00:20:29.995 "process": { 00:20:29.995 "type": "rebuild", 00:20:29.995 "target": "spare", 00:20:29.995 "progress": { 00:20:29.995 "blocks": 20480, 00:20:29.995 "percent": 32 00:20:29.995 } 00:20:29.995 }, 00:20:29.995 "base_bdevs_list": [ 00:20:29.995 { 00:20:29.995 "name": "spare", 00:20:29.995 "uuid": "43555928-4b30-56b3-9624-52e1d016cc7e", 00:20:29.995 "is_configured": true, 00:20:29.995 "data_offset": 2048, 00:20:29.995 "data_size": 63488 00:20:29.996 }, 00:20:29.996 { 00:20:29.996 "name": null, 00:20:29.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.996 "is_configured": false, 00:20:29.996 "data_offset": 2048, 00:20:29.996 "data_size": 63488 00:20:29.996 }, 00:20:29.996 { 00:20:29.996 "name": "BaseBdev3", 00:20:29.996 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:29.996 "is_configured": true, 00:20:29.996 "data_offset": 2048, 00:20:29.996 "data_size": 63488 00:20:29.996 }, 00:20:29.996 { 00:20:29.996 "name": "BaseBdev4", 00:20:29.996 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:29.996 "is_configured": true, 00:20:29.996 "data_offset": 2048, 00:20:29.996 "data_size": 63488 00:20:29.996 } 00:20:29.996 ] 00:20:29.996 }' 00:20:29.996 09:20:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:29.996 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:29.996 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:29.996 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:29.996 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:29.996 09:20:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.996 09:20:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.996 [2024-10-15 09:20:13.893723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:30.255 [2024-10-15 09:20:13.935498] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:30.255 [2024-10-15 09:20:13.935812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.255 [2024-10-15 09:20:13.935846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:30.255 [2024-10-15 09:20:13.935864] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:30.255 09:20:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.255 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:30.255 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:30.255 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:30.255 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:30.255 09:20:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:30.255 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:30.255 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.255 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.255 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.255 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.255 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.255 09:20:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.255 09:20:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.255 09:20:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.255 09:20:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.255 09:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.255 "name": "raid_bdev1", 00:20:30.255 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:30.255 "strip_size_kb": 0, 00:20:30.255 "state": "online", 00:20:30.255 "raid_level": "raid1", 00:20:30.255 "superblock": true, 00:20:30.255 "num_base_bdevs": 4, 00:20:30.255 "num_base_bdevs_discovered": 2, 00:20:30.255 "num_base_bdevs_operational": 2, 00:20:30.255 "base_bdevs_list": [ 00:20:30.255 { 00:20:30.255 "name": null, 00:20:30.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.255 "is_configured": false, 00:20:30.255 "data_offset": 0, 00:20:30.255 "data_size": 63488 00:20:30.255 }, 00:20:30.255 { 00:20:30.255 "name": null, 00:20:30.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.255 
"is_configured": false, 00:20:30.255 "data_offset": 2048, 00:20:30.255 "data_size": 63488 00:20:30.255 }, 00:20:30.255 { 00:20:30.255 "name": "BaseBdev3", 00:20:30.255 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:30.255 "is_configured": true, 00:20:30.255 "data_offset": 2048, 00:20:30.255 "data_size": 63488 00:20:30.255 }, 00:20:30.255 { 00:20:30.255 "name": "BaseBdev4", 00:20:30.255 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:30.255 "is_configured": true, 00:20:30.255 "data_offset": 2048, 00:20:30.255 "data_size": 63488 00:20:30.255 } 00:20:30.255 ] 00:20:30.255 }' 00:20:30.255 09:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.255 09:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:20:30.823 "name": "raid_bdev1", 00:20:30.823 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:30.823 "strip_size_kb": 0, 00:20:30.823 "state": "online", 00:20:30.823 "raid_level": "raid1", 00:20:30.823 "superblock": true, 00:20:30.823 "num_base_bdevs": 4, 00:20:30.823 "num_base_bdevs_discovered": 2, 00:20:30.823 "num_base_bdevs_operational": 2, 00:20:30.823 "base_bdevs_list": [ 00:20:30.823 { 00:20:30.823 "name": null, 00:20:30.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.823 "is_configured": false, 00:20:30.823 "data_offset": 0, 00:20:30.823 "data_size": 63488 00:20:30.823 }, 00:20:30.823 { 00:20:30.823 "name": null, 00:20:30.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.823 "is_configured": false, 00:20:30.823 "data_offset": 2048, 00:20:30.823 "data_size": 63488 00:20:30.823 }, 00:20:30.823 { 00:20:30.823 "name": "BaseBdev3", 00:20:30.823 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:30.823 "is_configured": true, 00:20:30.823 "data_offset": 2048, 00:20:30.823 "data_size": 63488 00:20:30.823 }, 00:20:30.823 { 00:20:30.823 "name": "BaseBdev4", 00:20:30.823 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:30.823 "is_configured": true, 00:20:30.823 "data_offset": 2048, 00:20:30.823 "data_size": 63488 00:20:30.823 } 00:20:30.823 ] 00:20:30.823 }' 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.823 [2024-10-15 09:20:14.653589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:30.823 [2024-10-15 09:20:14.653858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.823 [2024-10-15 09:20:14.653901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:20:30.823 [2024-10-15 09:20:14.653921] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.823 [2024-10-15 09:20:14.654633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.823 [2024-10-15 09:20:14.654675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:30.823 [2024-10-15 09:20:14.654803] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:30.823 [2024-10-15 09:20:14.654831] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:30.823 [2024-10-15 09:20:14.654858] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:30.823 [2024-10-15 09:20:14.654908] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:30.823 BaseBdev1 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:20:30.823 09:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:31.759 09:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:31.759 09:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:31.759 09:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:31.759 09:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:31.759 09:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:31.759 09:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:31.759 09:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:31.759 09:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.759 09:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.759 09:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.759 09:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.759 09:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.759 09:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.759 09:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.759 09:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.018 09:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.018 "name": "raid_bdev1", 00:20:32.018 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:32.018 "strip_size_kb": 0, 
00:20:32.018 "state": "online", 00:20:32.018 "raid_level": "raid1", 00:20:32.018 "superblock": true, 00:20:32.018 "num_base_bdevs": 4, 00:20:32.018 "num_base_bdevs_discovered": 2, 00:20:32.018 "num_base_bdevs_operational": 2, 00:20:32.018 "base_bdevs_list": [ 00:20:32.018 { 00:20:32.018 "name": null, 00:20:32.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.018 "is_configured": false, 00:20:32.018 "data_offset": 0, 00:20:32.018 "data_size": 63488 00:20:32.018 }, 00:20:32.018 { 00:20:32.018 "name": null, 00:20:32.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.018 "is_configured": false, 00:20:32.018 "data_offset": 2048, 00:20:32.018 "data_size": 63488 00:20:32.018 }, 00:20:32.018 { 00:20:32.018 "name": "BaseBdev3", 00:20:32.018 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:32.018 "is_configured": true, 00:20:32.018 "data_offset": 2048, 00:20:32.018 "data_size": 63488 00:20:32.018 }, 00:20:32.018 { 00:20:32.018 "name": "BaseBdev4", 00:20:32.018 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:32.018 "is_configured": true, 00:20:32.018 "data_offset": 2048, 00:20:32.018 "data_size": 63488 00:20:32.018 } 00:20:32.018 ] 00:20:32.018 }' 00:20:32.018 09:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.018 09:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.276 09:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:32.276 09:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.277 09:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:32.277 09:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:32.277 09:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.277 09:20:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.277 09:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.277 09:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.277 09:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.277 09:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.536 09:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.536 "name": "raid_bdev1", 00:20:32.536 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:32.536 "strip_size_kb": 0, 00:20:32.536 "state": "online", 00:20:32.536 "raid_level": "raid1", 00:20:32.536 "superblock": true, 00:20:32.536 "num_base_bdevs": 4, 00:20:32.536 "num_base_bdevs_discovered": 2, 00:20:32.536 "num_base_bdevs_operational": 2, 00:20:32.536 "base_bdevs_list": [ 00:20:32.536 { 00:20:32.536 "name": null, 00:20:32.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.536 "is_configured": false, 00:20:32.536 "data_offset": 0, 00:20:32.536 "data_size": 63488 00:20:32.536 }, 00:20:32.536 { 00:20:32.536 "name": null, 00:20:32.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.536 "is_configured": false, 00:20:32.536 "data_offset": 2048, 00:20:32.536 "data_size": 63488 00:20:32.536 }, 00:20:32.536 { 00:20:32.536 "name": "BaseBdev3", 00:20:32.536 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:32.536 "is_configured": true, 00:20:32.536 "data_offset": 2048, 00:20:32.536 "data_size": 63488 00:20:32.536 }, 00:20:32.536 { 00:20:32.536 "name": "BaseBdev4", 00:20:32.536 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:32.536 "is_configured": true, 00:20:32.536 "data_offset": 2048, 00:20:32.536 "data_size": 63488 00:20:32.536 } 00:20:32.536 ] 00:20:32.536 }' 00:20:32.536 09:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:20:32.536 09:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:32.536 09:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.536 09:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:32.536 09:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:32.536 09:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:20:32.536 09:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:32.536 09:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:32.536 09:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:32.536 09:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:32.536 09:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:32.536 09:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:32.536 09:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.536 09:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.536 [2024-10-15 09:20:16.346077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:32.536 [2024-10-15 09:20:16.346441] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:32.536 [2024-10-15 09:20:16.346465] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this 
bdev's uuid 00:20:32.536 request: 00:20:32.536 { 00:20:32.536 "base_bdev": "BaseBdev1", 00:20:32.536 "raid_bdev": "raid_bdev1", 00:20:32.536 "method": "bdev_raid_add_base_bdev", 00:20:32.536 "req_id": 1 00:20:32.536 } 00:20:32.536 Got JSON-RPC error response 00:20:32.536 response: 00:20:32.536 { 00:20:32.536 "code": -22, 00:20:32.536 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:32.536 } 00:20:32.536 09:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:32.536 09:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:20:32.536 09:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:32.536 09:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:32.536 09:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:32.536 09:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:33.473 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:33.473 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:33.473 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:33.473 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:33.473 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:33.473 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:33.474 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.474 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.474 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- 
# local num_base_bdevs_discovered 00:20:33.474 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.474 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.474 09:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.474 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.474 09:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.474 09:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.733 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.733 "name": "raid_bdev1", 00:20:33.733 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:33.733 "strip_size_kb": 0, 00:20:33.733 "state": "online", 00:20:33.733 "raid_level": "raid1", 00:20:33.733 "superblock": true, 00:20:33.733 "num_base_bdevs": 4, 00:20:33.733 "num_base_bdevs_discovered": 2, 00:20:33.733 "num_base_bdevs_operational": 2, 00:20:33.733 "base_bdevs_list": [ 00:20:33.733 { 00:20:33.733 "name": null, 00:20:33.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.733 "is_configured": false, 00:20:33.733 "data_offset": 0, 00:20:33.733 "data_size": 63488 00:20:33.733 }, 00:20:33.733 { 00:20:33.733 "name": null, 00:20:33.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.733 "is_configured": false, 00:20:33.733 "data_offset": 2048, 00:20:33.733 "data_size": 63488 00:20:33.733 }, 00:20:33.733 { 00:20:33.733 "name": "BaseBdev3", 00:20:33.733 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:33.733 "is_configured": true, 00:20:33.733 "data_offset": 2048, 00:20:33.733 "data_size": 63488 00:20:33.733 }, 00:20:33.733 { 00:20:33.733 "name": "BaseBdev4", 00:20:33.733 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:33.733 "is_configured": true, 00:20:33.733 
"data_offset": 2048, 00:20:33.733 "data_size": 63488 00:20:33.733 } 00:20:33.733 ] 00:20:33.733 }' 00:20:33.733 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.733 09:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.992 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:33.992 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:33.992 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:33.992 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:33.992 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:33.992 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.992 09:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.992 09:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.992 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.251 09:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.251 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:34.251 "name": "raid_bdev1", 00:20:34.251 "uuid": "90da8cb8-acff-478d-84e6-f1451f9e4596", 00:20:34.251 "strip_size_kb": 0, 00:20:34.251 "state": "online", 00:20:34.251 "raid_level": "raid1", 00:20:34.251 "superblock": true, 00:20:34.251 "num_base_bdevs": 4, 00:20:34.251 "num_base_bdevs_discovered": 2, 00:20:34.251 "num_base_bdevs_operational": 2, 00:20:34.251 "base_bdevs_list": [ 00:20:34.251 { 00:20:34.251 "name": null, 00:20:34.251 "uuid": "00000000-0000-0000-0000-000000000000", 
00:20:34.251 "is_configured": false, 00:20:34.251 "data_offset": 0, 00:20:34.251 "data_size": 63488 00:20:34.251 }, 00:20:34.251 { 00:20:34.251 "name": null, 00:20:34.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.251 "is_configured": false, 00:20:34.251 "data_offset": 2048, 00:20:34.251 "data_size": 63488 00:20:34.251 }, 00:20:34.251 { 00:20:34.251 "name": "BaseBdev3", 00:20:34.251 "uuid": "a92d8b1d-b696-5d02-bc84-689d42e98608", 00:20:34.251 "is_configured": true, 00:20:34.251 "data_offset": 2048, 00:20:34.251 "data_size": 63488 00:20:34.251 }, 00:20:34.251 { 00:20:34.251 "name": "BaseBdev4", 00:20:34.251 "uuid": "2cb8efe8-81b9-529b-b9f0-08de55858516", 00:20:34.251 "is_configured": true, 00:20:34.251 "data_offset": 2048, 00:20:34.251 "data_size": 63488 00:20:34.251 } 00:20:34.251 ] 00:20:34.251 }' 00:20:34.251 09:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:34.251 09:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:34.251 09:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:34.251 09:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:34.251 09:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78558 00:20:34.251 09:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 78558 ']' 00:20:34.251 09:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 78558 00:20:34.251 09:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:20:34.251 09:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:34.251 09:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78558 00:20:34.251 killing process with pid 78558 00:20:34.251 Received shutdown signal, 
test time was about 60.000000 seconds 00:20:34.251 00:20:34.251 Latency(us) 00:20:34.251 [2024-10-15T09:20:18.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.251 [2024-10-15T09:20:18.179Z] =================================================================================================================== 00:20:34.251 [2024-10-15T09:20:18.179Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:34.251 09:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:34.251 09:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:34.251 09:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78558' 00:20:34.251 09:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 78558 00:20:34.251 [2024-10-15 09:20:18.105673] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:34.251 09:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 78558 00:20:34.251 [2024-10-15 09:20:18.105844] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:34.251 [2024-10-15 09:20:18.105946] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:34.251 [2024-10-15 09:20:18.105963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:34.817 [2024-10-15 09:20:18.576089] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:36.213 09:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:20:36.213 00:20:36.213 real 0m29.800s 00:20:36.213 user 0m35.971s 00:20:36.213 sys 0m4.169s 00:20:36.213 09:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:36.213 09:20:19 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:36.213 ************************************ 00:20:36.213 END TEST raid_rebuild_test_sb 00:20:36.214 ************************************ 00:20:36.214 09:20:19 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:20:36.214 09:20:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:20:36.214 09:20:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:36.214 09:20:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:36.214 ************************************ 00:20:36.214 START TEST raid_rebuild_test_io 00:20:36.214 ************************************ 00:20:36.214 09:20:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:20:36.214 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:36.214 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:36.214 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:36.214 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:20:36.214 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:36.214 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:36.214 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:36.214 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:36.214 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:36.214 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:36.214 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:36.214 09:20:19 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:36.214 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:36.214 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:36.214 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:36.214 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:36.214 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:36.215 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:36.215 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:36.215 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:36.215 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:36.215 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:36.215 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:36.215 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:36.215 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:36.215 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:36.215 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:36.215 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:36.215 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:36.215 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79355 00:20:36.215 09:20:19 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79355 00:20:36.215 09:20:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:36.215 09:20:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 79355 ']' 00:20:36.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.215 09:20:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.215 09:20:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:36.215 09:20:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.215 09:20:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:36.215 09:20:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:36.215 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:36.215 Zero copy mechanism will not be used. 00:20:36.215 [2024-10-15 09:20:19.891612] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
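The bdevperf notice above ("I/O size of 3145728 is greater than zero copy threshold (65536)") follows from a single size comparison between the `-o 3M` I/O size and the zero-copy cutoff. An illustrative reconstruction of that check; the constant names below are hypothetical, only the two sizes come from the log:

```python
# Illustrative reconstruction of the zero-copy decision logged by bdevperf.
# Variable names are hypothetical; the two sizes are taken from the trace.
ZERO_COPY_THRESHOLD = 65536   # 64 KiB, as reported in the log
io_size = 3 * 1024 * 1024     # -o 3M from the bdevperf command line

use_zero_copy = io_size <= ZERO_COPY_THRESHOLD
relation = "is within" if use_zero_copy else "is greater than"
print(f"I/O size of {io_size} {relation} "
      f"zero copy threshold ({ZERO_COPY_THRESHOLD})")
```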
00:20:36.215 [2024-10-15 09:20:19.891813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79355 ] 00:20:36.215 [2024-10-15 09:20:20.063961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.474 [2024-10-15 09:20:20.212087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.732 [2024-10-15 09:20:20.436778] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:36.732 [2024-10-15 09:20:20.436893] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:36.991 09:20:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:36.991 09:20:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:20:36.991 09:20:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:36.991 09:20:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:36.991 09:20:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.991 09:20:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:36.991 BaseBdev1_malloc 00:20:36.991 09:20:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.991 09:20:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:36.991 09:20:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.991 09:20:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:36.991 [2024-10-15 09:20:20.894164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:20:36.991 [2024-10-15 09:20:20.894255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.991 [2024-10-15 09:20:20.894307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:36.991 [2024-10-15 09:20:20.894330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.991 [2024-10-15 09:20:20.897430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.991 [2024-10-15 09:20:20.897484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:36.991 BaseBdev1 00:20:36.991 09:20:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.991 09:20:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:36.991 09:20:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:36.991 09:20:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.991 09:20:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:37.250 BaseBdev2_malloc 00:20:37.250 09:20:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.250 09:20:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:37.250 09:20:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.250 09:20:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:37.251 [2024-10-15 09:20:20.954971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:37.251 [2024-10-15 09:20:20.955069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.251 [2024-10-15 09:20:20.955100] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:37.251 [2024-10-15 09:20:20.955118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.251 [2024-10-15 09:20:20.958220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.251 [2024-10-15 09:20:20.958266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:37.251 BaseBdev2 00:20:37.251 09:20:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.251 09:20:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:37.251 09:20:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:37.251 09:20:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.251 09:20:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:37.251 BaseBdev3_malloc 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:37.251 [2024-10-15 09:20:21.028143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:37.251 [2024-10-15 09:20:21.028246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.251 [2024-10-15 09:20:21.028282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:37.251 [2024-10-15 09:20:21.028301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:20:37.251 [2024-10-15 09:20:21.031365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.251 [2024-10-15 09:20:21.031416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:37.251 BaseBdev3 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:37.251 BaseBdev4_malloc 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:37.251 [2024-10-15 09:20:21.085061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:37.251 [2024-10-15 09:20:21.085222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.251 [2024-10-15 09:20:21.085269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:37.251 [2024-10-15 09:20:21.085290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.251 [2024-10-15 09:20:21.088288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.251 [2024-10-15 09:20:21.088502] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:37.251 BaseBdev4 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:37.251 spare_malloc 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:37.251 spare_delay 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:37.251 [2024-10-15 09:20:21.151718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:37.251 [2024-10-15 09:20:21.151813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.251 [2024-10-15 09:20:21.151843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:37.251 [2024-10-15 09:20:21.151860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
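Each base device in the loop above is built from two RPCs: a 32 MiB malloc bdev with 512-byte blocks (`bdev_malloc_create 32 512 -b <name>_malloc`) wrapped in a passthru bdev (`bdev_passthru_create`). A sketch that reconstructs the command sequence the traced loop issues; the commands are listed for illustration, not sent to an RPC socket:

```python
# Reconstructs the per-device RPC pairs issued by the traced loop.
base_bdevs = ["BaseBdev1", "BaseBdev2", "BaseBdev3", "BaseBdev4"]

cmds = []
for bdev in base_bdevs:
    # 32 MiB malloc bdev with 512-byte blocks, then a passthru on top.
    cmds.append(f"bdev_malloc_create 32 512 -b {bdev}_malloc")
    cmds.append(f"bdev_passthru_create -b {bdev}_malloc -p {bdev}")

print(len(cmds))  # two RPCs per base device
```

The `spare` device traced right after follows the same shape, with an extra `bdev_delay_create` layered between the malloc and passthru bdevs.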
00:20:37.251 [2024-10-15 09:20:21.154966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.251 [2024-10-15 09:20:21.155050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:37.251 spare 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:37.251 [2024-10-15 09:20:21.159949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:37.251 [2024-10-15 09:20:21.162565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:37.251 [2024-10-15 09:20:21.162666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:37.251 [2024-10-15 09:20:21.162751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:37.251 [2024-10-15 09:20:21.162873] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:37.251 [2024-10-15 09:20:21.162894] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:37.251 [2024-10-15 09:20:21.163252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:37.251 [2024-10-15 09:20:21.163492] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:37.251 [2024-10-15 09:20:21.163513] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:37.251 [2024-10-15 09:20:21.163708] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.251 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:37.510 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.510 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.510 "name": "raid_bdev1", 00:20:37.510 "uuid": "9da71b9b-44bd-4889-a43f-c57400993c58", 00:20:37.510 
"strip_size_kb": 0, 00:20:37.510 "state": "online", 00:20:37.510 "raid_level": "raid1", 00:20:37.510 "superblock": false, 00:20:37.510 "num_base_bdevs": 4, 00:20:37.510 "num_base_bdevs_discovered": 4, 00:20:37.510 "num_base_bdevs_operational": 4, 00:20:37.510 "base_bdevs_list": [ 00:20:37.510 { 00:20:37.510 "name": "BaseBdev1", 00:20:37.510 "uuid": "b61d0ea2-9842-5f2d-b49c-9bc3b314f41c", 00:20:37.510 "is_configured": true, 00:20:37.510 "data_offset": 0, 00:20:37.510 "data_size": 65536 00:20:37.510 }, 00:20:37.510 { 00:20:37.510 "name": "BaseBdev2", 00:20:37.510 "uuid": "b9fdbe2e-4837-54ab-9ee1-a637a0e82d9c", 00:20:37.510 "is_configured": true, 00:20:37.510 "data_offset": 0, 00:20:37.510 "data_size": 65536 00:20:37.510 }, 00:20:37.510 { 00:20:37.510 "name": "BaseBdev3", 00:20:37.510 "uuid": "1b17ddd3-6944-513c-a45c-c606ec59bfe4", 00:20:37.510 "is_configured": true, 00:20:37.510 "data_offset": 0, 00:20:37.510 "data_size": 65536 00:20:37.510 }, 00:20:37.510 { 00:20:37.510 "name": "BaseBdev4", 00:20:37.510 "uuid": "abdf4390-edc1-507d-8a95-bc8684aa83e9", 00:20:37.510 "is_configured": true, 00:20:37.510 "data_offset": 0, 00:20:37.510 "data_size": 65536 00:20:37.510 } 00:20:37.510 ] 00:20:37.510 }' 00:20:37.510 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.510 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:37.768 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:37.768 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.768 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:37.768 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:37.768 [2024-10-15 09:20:21.688589] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:38.027 09:20:21 
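The `verify_raid_bdev_state` helper traced above selects the raid bdev from `bdev_raid_get_bdevs all` with jq and compares its state, level, and bdev counts. A minimal Python sketch of the same checks (the helper itself is bash; the field names and values are taken from the JSON dump above, trimmed to the fields compared):

```python
import json

# Trimmed copy of the bdev_raid_get_bdevs output dumped in the trace.
raid_bdev_info = json.loads('''{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4
}''')

def verify_raid_bdev_state(info, expected_state, raid_level, operational):
    # Mirrors the comparisons the bash helper performs on the jq output.
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["num_base_bdevs_discovered"] == operational
            and info["num_base_bdevs_operational"] == operational)

print(verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 4))
```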
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.027 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:20:38.027 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.027 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:38.027 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.027 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:38.027 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.027 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:38.027 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:20:38.027 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:38.027 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:38.027 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.027 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:38.027 [2024-10-15 09:20:21.800110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:38.027 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.027 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:38.027 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:38.027 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:20:38.027 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:38.027 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:38.027 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:38.027 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.028 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.028 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:38.028 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.028 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.028 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.028 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.028 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:38.028 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.028 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.028 "name": "raid_bdev1", 00:20:38.028 "uuid": "9da71b9b-44bd-4889-a43f-c57400993c58", 00:20:38.028 "strip_size_kb": 0, 00:20:38.028 "state": "online", 00:20:38.028 "raid_level": "raid1", 00:20:38.028 "superblock": false, 00:20:38.028 "num_base_bdevs": 4, 00:20:38.028 "num_base_bdevs_discovered": 3, 00:20:38.028 "num_base_bdevs_operational": 3, 00:20:38.028 "base_bdevs_list": [ 00:20:38.028 { 00:20:38.028 "name": null, 00:20:38.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.028 "is_configured": false, 00:20:38.028 "data_offset": 0, 00:20:38.028 "data_size": 65536 00:20:38.028 
}, 00:20:38.028 { 00:20:38.028 "name": "BaseBdev2", 00:20:38.028 "uuid": "b9fdbe2e-4837-54ab-9ee1-a637a0e82d9c", 00:20:38.028 "is_configured": true, 00:20:38.028 "data_offset": 0, 00:20:38.028 "data_size": 65536 00:20:38.028 }, 00:20:38.028 { 00:20:38.028 "name": "BaseBdev3", 00:20:38.028 "uuid": "1b17ddd3-6944-513c-a45c-c606ec59bfe4", 00:20:38.028 "is_configured": true, 00:20:38.028 "data_offset": 0, 00:20:38.028 "data_size": 65536 00:20:38.028 }, 00:20:38.028 { 00:20:38.028 "name": "BaseBdev4", 00:20:38.028 "uuid": "abdf4390-edc1-507d-8a95-bc8684aa83e9", 00:20:38.028 "is_configured": true, 00:20:38.028 "data_offset": 0, 00:20:38.028 "data_size": 65536 00:20:38.028 } 00:20:38.028 ] 00:20:38.028 }' 00:20:38.028 09:20:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.028 09:20:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:38.028 [2024-10-15 09:20:21.933204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:38.028 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:38.028 Zero copy mechanism will not be used. 00:20:38.028 Running I/O for 60 seconds... 
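After `bdev_raid_remove_base_bdev BaseBdev1`, the dump above shows slot 0 with a null name and all-zero uuid, and `num_base_bdevs_discovered` drops from 4 to 3. That count corresponds to the configured slots in `base_bdevs_list`; a sketch, using an abridged copy of the post-removal list from the trace:

```python
import json

# Abridged base_bdevs_list from the post-removal dump above.
base_bdevs_list = json.loads('''[
  {"name": null, "uuid": "00000000-0000-0000-0000-000000000000",
   "is_configured": false},
  {"name": "BaseBdev2", "is_configured": true},
  {"name": "BaseBdev3", "is_configured": true},
  {"name": "BaseBdev4", "is_configured": true}
]''')

# num_base_bdevs_discovered tracks the still-configured slots.
discovered = sum(1 for b in base_bdevs_list if b["is_configured"])
print(discovered)
```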
00:20:38.595 09:20:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:38.595 09:20:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.595 09:20:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:38.595 [2024-10-15 09:20:22.349758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:38.595 09:20:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.595 09:20:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:38.595 [2024-10-15 09:20:22.431656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:20:38.595 [2024-10-15 09:20:22.434813] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:38.854 [2024-10-15 09:20:22.557813] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:38.854 [2024-10-15 09:20:22.559058] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:38.854 [2024-10-15 09:20:22.717371] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:39.406 147.00 IOPS, 441.00 MiB/s [2024-10-15T09:20:23.334Z] [2024-10-15 09:20:23.111730] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:39.406 [2024-10-15 09:20:23.112793] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:39.406 [2024-10-15 09:20:23.329521] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:39.665 09:20:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:39.665 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:39.665 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:39.665 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:39.665 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:39.665 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.665 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.665 09:20:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.665 09:20:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:39.665 09:20:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.665 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:39.665 "name": "raid_bdev1", 00:20:39.665 "uuid": "9da71b9b-44bd-4889-a43f-c57400993c58", 00:20:39.665 "strip_size_kb": 0, 00:20:39.665 "state": "online", 00:20:39.665 "raid_level": "raid1", 00:20:39.665 "superblock": false, 00:20:39.665 "num_base_bdevs": 4, 00:20:39.665 "num_base_bdevs_discovered": 4, 00:20:39.665 "num_base_bdevs_operational": 4, 00:20:39.665 "process": { 00:20:39.665 "type": "rebuild", 00:20:39.665 "target": "spare", 00:20:39.665 "progress": { 00:20:39.665 "blocks": 10240, 00:20:39.665 "percent": 15 00:20:39.665 } 00:20:39.665 }, 00:20:39.665 "base_bdevs_list": [ 00:20:39.665 { 00:20:39.665 "name": "spare", 00:20:39.665 "uuid": "3bc18baf-d1dc-5b76-8565-8f5e9fabbb2d", 00:20:39.665 "is_configured": true, 00:20:39.665 "data_offset": 0, 00:20:39.665 "data_size": 65536 00:20:39.665 }, 00:20:39.665 { 
00:20:39.665 "name": "BaseBdev2", 00:20:39.665 "uuid": "b9fdbe2e-4837-54ab-9ee1-a637a0e82d9c", 00:20:39.665 "is_configured": true, 00:20:39.665 "data_offset": 0, 00:20:39.665 "data_size": 65536 00:20:39.665 }, 00:20:39.665 { 00:20:39.665 "name": "BaseBdev3", 00:20:39.665 "uuid": "1b17ddd3-6944-513c-a45c-c606ec59bfe4", 00:20:39.665 "is_configured": true, 00:20:39.665 "data_offset": 0, 00:20:39.665 "data_size": 65536 00:20:39.665 }, 00:20:39.665 { 00:20:39.665 "name": "BaseBdev4", 00:20:39.665 "uuid": "abdf4390-edc1-507d-8a95-bc8684aa83e9", 00:20:39.665 "is_configured": true, 00:20:39.665 "data_offset": 0, 00:20:39.665 "data_size": 65536 00:20:39.665 } 00:20:39.665 ] 00:20:39.665 }' 00:20:39.665 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:39.665 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:39.665 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:39.665 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:39.665 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:39.665 09:20:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.665 09:20:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:39.665 [2024-10-15 09:20:23.590665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:39.924 [2024-10-15 09:20:23.690401] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:39.924 [2024-10-15 09:20:23.717751] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:39.924 [2024-10-15 09:20:23.732852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
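The rebuild-progress checks above read `jq -r '.process.type // "none"'` and `.process.target // "none"`: jq's `//` alternative yields `"none"` when no `process` object is present, which is how the helper distinguishes an idle array from one mid-rebuild. A dict-based Python equivalent (the two abridged info objects mirror the idle and rebuilding dumps in this trace):

```python
# jq's '.process.type // "none"' falls back to "none" when .process is absent.
def process_field(info, field):
    # .get chains reproduce jq's missing-key behavior; 'or' supplies "none".
    return info.get("process", {}).get(field) or "none"

info_idle = {"name": "raid_bdev1", "state": "online"}
info_rebuilding = {"name": "raid_bdev1",
                   "process": {"type": "rebuild", "target": "spare"}}

print(process_field(info_idle, "type"))        # no process key -> "none"
print(process_field(info_rebuilding, "type"))
print(process_field(info_rebuilding, "target"))
```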
raid_bdev_destroy_cb 00:20:39.924 [2024-10-15 09:20:23.732948] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:39.924 [2024-10-15 09:20:23.732969] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:39.924 [2024-10-15 09:20:23.765568] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:20:39.924 09:20:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.924 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:39.924 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:39.924 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:39.924 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:39.924 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:39.924 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:39.924 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:39.924 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:39.924 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:39.924 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:39.925 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.925 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.925 09:20:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.925 09:20:23 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:39.925 09:20:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.184 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.184 "name": "raid_bdev1", 00:20:40.184 "uuid": "9da71b9b-44bd-4889-a43f-c57400993c58", 00:20:40.184 "strip_size_kb": 0, 00:20:40.184 "state": "online", 00:20:40.184 "raid_level": "raid1", 00:20:40.184 "superblock": false, 00:20:40.184 "num_base_bdevs": 4, 00:20:40.184 "num_base_bdevs_discovered": 3, 00:20:40.184 "num_base_bdevs_operational": 3, 00:20:40.184 "base_bdevs_list": [ 00:20:40.184 { 00:20:40.184 "name": null, 00:20:40.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.184 "is_configured": false, 00:20:40.184 "data_offset": 0, 00:20:40.184 "data_size": 65536 00:20:40.184 }, 00:20:40.184 { 00:20:40.184 "name": "BaseBdev2", 00:20:40.184 "uuid": "b9fdbe2e-4837-54ab-9ee1-a637a0e82d9c", 00:20:40.184 "is_configured": true, 00:20:40.184 "data_offset": 0, 00:20:40.184 "data_size": 65536 00:20:40.184 }, 00:20:40.184 { 00:20:40.184 "name": "BaseBdev3", 00:20:40.184 "uuid": "1b17ddd3-6944-513c-a45c-c606ec59bfe4", 00:20:40.184 "is_configured": true, 00:20:40.184 "data_offset": 0, 00:20:40.184 "data_size": 65536 00:20:40.184 }, 00:20:40.184 { 00:20:40.184 "name": "BaseBdev4", 00:20:40.184 "uuid": "abdf4390-edc1-507d-8a95-bc8684aa83e9", 00:20:40.184 "is_configured": true, 00:20:40.184 "data_offset": 0, 00:20:40.184 "data_size": 65536 00:20:40.184 } 00:20:40.184 ] 00:20:40.184 }' 00:20:40.184 09:20:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.184 09:20:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:40.443 124.50 IOPS, 373.50 MiB/s [2024-10-15T09:20:24.371Z] 09:20:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:40.443 09:20:24 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:40.443 09:20:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:40.443 09:20:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:40.443 09:20:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:40.443 09:20:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.443 09:20:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.443 09:20:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.443 09:20:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:40.443 09:20:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.443 09:20:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:40.443 "name": "raid_bdev1", 00:20:40.443 "uuid": "9da71b9b-44bd-4889-a43f-c57400993c58", 00:20:40.443 "strip_size_kb": 0, 00:20:40.443 "state": "online", 00:20:40.443 "raid_level": "raid1", 00:20:40.443 "superblock": false, 00:20:40.443 "num_base_bdevs": 4, 00:20:40.443 "num_base_bdevs_discovered": 3, 00:20:40.443 "num_base_bdevs_operational": 3, 00:20:40.443 "base_bdevs_list": [ 00:20:40.443 { 00:20:40.443 "name": null, 00:20:40.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.443 "is_configured": false, 00:20:40.443 "data_offset": 0, 00:20:40.443 "data_size": 65536 00:20:40.443 }, 00:20:40.443 { 00:20:40.443 "name": "BaseBdev2", 00:20:40.443 "uuid": "b9fdbe2e-4837-54ab-9ee1-a637a0e82d9c", 00:20:40.443 "is_configured": true, 00:20:40.443 "data_offset": 0, 00:20:40.443 "data_size": 65536 00:20:40.443 }, 00:20:40.443 { 00:20:40.443 "name": "BaseBdev3", 00:20:40.443 "uuid": "1b17ddd3-6944-513c-a45c-c606ec59bfe4", 
00:20:40.443 "is_configured": true, 00:20:40.443 "data_offset": 0, 00:20:40.443 "data_size": 65536 00:20:40.443 }, 00:20:40.443 { 00:20:40.443 "name": "BaseBdev4", 00:20:40.443 "uuid": "abdf4390-edc1-507d-8a95-bc8684aa83e9", 00:20:40.443 "is_configured": true, 00:20:40.444 "data_offset": 0, 00:20:40.444 "data_size": 65536 00:20:40.444 } 00:20:40.444 ] 00:20:40.444 }' 00:20:40.444 09:20:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:40.702 09:20:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:40.702 09:20:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:40.702 09:20:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:40.702 09:20:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:40.702 09:20:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.702 09:20:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:40.702 [2024-10-15 09:20:24.466966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:40.702 09:20:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.702 09:20:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:40.702 [2024-10-15 09:20:24.568980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:20:40.702 [2024-10-15 09:20:24.571998] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:40.962 [2024-10-15 09:20:24.704605] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:40.962 [2024-10-15 09:20:24.706841] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:41.221 131.67 IOPS, 395.00 MiB/s [2024-10-15T09:20:25.149Z] [2024-10-15 09:20:24.950032] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:41.221 [2024-10-15 09:20:24.951432] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:41.479 [2024-10-15 09:20:25.351364] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:41.479 [2024-10-15 09:20:25.352084] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:41.738 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:41.738 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:41.738 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:41.738 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:41.738 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:41.738 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.738 09:20:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.738 09:20:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:41.738 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.738 09:20:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.738 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:41.738 "name": "raid_bdev1", 
00:20:41.738 "uuid": "9da71b9b-44bd-4889-a43f-c57400993c58", 00:20:41.738 "strip_size_kb": 0, 00:20:41.738 "state": "online", 00:20:41.738 "raid_level": "raid1", 00:20:41.738 "superblock": false, 00:20:41.738 "num_base_bdevs": 4, 00:20:41.738 "num_base_bdevs_discovered": 4, 00:20:41.738 "num_base_bdevs_operational": 4, 00:20:41.738 "process": { 00:20:41.738 "type": "rebuild", 00:20:41.738 "target": "spare", 00:20:41.738 "progress": { 00:20:41.738 "blocks": 8192, 00:20:41.738 "percent": 12 00:20:41.738 } 00:20:41.738 }, 00:20:41.738 "base_bdevs_list": [ 00:20:41.738 { 00:20:41.738 "name": "spare", 00:20:41.738 "uuid": "3bc18baf-d1dc-5b76-8565-8f5e9fabbb2d", 00:20:41.738 "is_configured": true, 00:20:41.738 "data_offset": 0, 00:20:41.738 "data_size": 65536 00:20:41.738 }, 00:20:41.738 { 00:20:41.738 "name": "BaseBdev2", 00:20:41.738 "uuid": "b9fdbe2e-4837-54ab-9ee1-a637a0e82d9c", 00:20:41.738 "is_configured": true, 00:20:41.738 "data_offset": 0, 00:20:41.738 "data_size": 65536 00:20:41.738 }, 00:20:41.738 { 00:20:41.738 "name": "BaseBdev3", 00:20:41.738 "uuid": "1b17ddd3-6944-513c-a45c-c606ec59bfe4", 00:20:41.738 "is_configured": true, 00:20:41.738 "data_offset": 0, 00:20:41.738 "data_size": 65536 00:20:41.738 }, 00:20:41.738 { 00:20:41.738 "name": "BaseBdev4", 00:20:41.738 "uuid": "abdf4390-edc1-507d-8a95-bc8684aa83e9", 00:20:41.738 "is_configured": true, 00:20:41.738 "data_offset": 0, 00:20:41.738 "data_size": 65536 00:20:41.738 } 00:20:41.738 ] 00:20:41.738 }' 00:20:41.738 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:41.738 [2024-10-15 09:20:25.587886] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:41.738 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:41.738 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:20:41.996 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:41.996 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:41.996 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:41.996 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:41.996 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:20:41.996 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:41.996 09:20:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.996 09:20:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:41.996 [2024-10-15 09:20:25.693021] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:41.996 [2024-10-15 09:20:25.814988] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:41.996 [2024-10-15 09:20:25.815895] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:41.996 [2024-10-15 09:20:25.825529] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:20:41.996 [2024-10-15 09:20:25.825571] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:20:41.996 [2024-10-15 09:20:25.847365] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:41.996 09:20:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.996 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:20:41.996 09:20:25 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:20:41.996 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:41.996 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:41.996 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:41.997 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:41.997 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:41.997 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.997 09:20:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.997 09:20:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:41.997 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.997 09:20:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.997 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:41.997 "name": "raid_bdev1", 00:20:41.997 "uuid": "9da71b9b-44bd-4889-a43f-c57400993c58", 00:20:41.997 "strip_size_kb": 0, 00:20:41.997 "state": "online", 00:20:41.997 "raid_level": "raid1", 00:20:41.997 "superblock": false, 00:20:41.997 "num_base_bdevs": 4, 00:20:41.997 "num_base_bdevs_discovered": 3, 00:20:41.997 "num_base_bdevs_operational": 3, 00:20:41.997 "process": { 00:20:41.997 "type": "rebuild", 00:20:41.997 "target": "spare", 00:20:41.997 "progress": { 00:20:41.997 "blocks": 14336, 00:20:41.997 "percent": 21 00:20:41.997 } 00:20:41.997 }, 00:20:41.997 "base_bdevs_list": [ 00:20:41.997 { 00:20:41.997 "name": "spare", 00:20:41.997 "uuid": "3bc18baf-d1dc-5b76-8565-8f5e9fabbb2d", 00:20:41.997 
"is_configured": true, 00:20:41.997 "data_offset": 0, 00:20:41.997 "data_size": 65536 00:20:41.997 }, 00:20:41.997 { 00:20:41.997 "name": null, 00:20:41.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.997 "is_configured": false, 00:20:41.997 "data_offset": 0, 00:20:41.997 "data_size": 65536 00:20:41.997 }, 00:20:41.997 { 00:20:41.997 "name": "BaseBdev3", 00:20:41.997 "uuid": "1b17ddd3-6944-513c-a45c-c606ec59bfe4", 00:20:41.997 "is_configured": true, 00:20:41.997 "data_offset": 0, 00:20:41.997 "data_size": 65536 00:20:41.997 }, 00:20:41.997 { 00:20:41.997 "name": "BaseBdev4", 00:20:41.997 "uuid": "abdf4390-edc1-507d-8a95-bc8684aa83e9", 00:20:41.997 "is_configured": true, 00:20:41.997 "data_offset": 0, 00:20:41.997 "data_size": 65536 00:20:41.997 } 00:20:41.997 ] 00:20:41.997 }' 00:20:42.254 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:42.254 112.25 IOPS, 336.75 MiB/s [2024-10-15T09:20:26.182Z] 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:42.254 09:20:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:42.254 09:20:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:42.254 09:20:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=537 00:20:42.254 09:20:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:42.255 09:20:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:42.255 09:20:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:42.255 09:20:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:42.255 09:20:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:42.255 09:20:26 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:42.255 09:20:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.255 09:20:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.255 09:20:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.255 09:20:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:42.255 09:20:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.255 [2024-10-15 09:20:26.080676] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:42.255 09:20:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:42.255 "name": "raid_bdev1", 00:20:42.255 "uuid": "9da71b9b-44bd-4889-a43f-c57400993c58", 00:20:42.255 "strip_size_kb": 0, 00:20:42.255 "state": "online", 00:20:42.255 "raid_level": "raid1", 00:20:42.255 "superblock": false, 00:20:42.255 "num_base_bdevs": 4, 00:20:42.255 "num_base_bdevs_discovered": 3, 00:20:42.255 "num_base_bdevs_operational": 3, 00:20:42.255 "process": { 00:20:42.255 "type": "rebuild", 00:20:42.255 "target": "spare", 00:20:42.255 "progress": { 00:20:42.255 "blocks": 14336, 00:20:42.255 "percent": 21 00:20:42.255 } 00:20:42.255 }, 00:20:42.255 "base_bdevs_list": [ 00:20:42.255 { 00:20:42.255 "name": "spare", 00:20:42.255 "uuid": "3bc18baf-d1dc-5b76-8565-8f5e9fabbb2d", 00:20:42.255 "is_configured": true, 00:20:42.255 "data_offset": 0, 00:20:42.255 "data_size": 65536 00:20:42.255 }, 00:20:42.255 { 00:20:42.255 "name": null, 00:20:42.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.255 "is_configured": false, 00:20:42.255 "data_offset": 0, 00:20:42.255 "data_size": 65536 00:20:42.255 }, 00:20:42.255 { 00:20:42.255 "name": "BaseBdev3", 00:20:42.255 
"uuid": "1b17ddd3-6944-513c-a45c-c606ec59bfe4", 00:20:42.255 "is_configured": true, 00:20:42.255 "data_offset": 0, 00:20:42.255 "data_size": 65536 00:20:42.255 }, 00:20:42.255 { 00:20:42.255 "name": "BaseBdev4", 00:20:42.255 "uuid": "abdf4390-edc1-507d-8a95-bc8684aa83e9", 00:20:42.255 "is_configured": true, 00:20:42.255 "data_offset": 0, 00:20:42.255 "data_size": 65536 00:20:42.255 } 00:20:42.255 ] 00:20:42.255 }' 00:20:42.255 09:20:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:42.255 09:20:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:42.255 09:20:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:42.512 09:20:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:42.513 09:20:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:42.771 [2024-10-15 09:20:26.455462] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:42.771 [2024-10-15 09:20:26.695822] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:42.771 [2024-10-15 09:20:26.697581] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:43.030 103.60 IOPS, 310.80 MiB/s [2024-10-15T09:20:26.958Z] [2024-10-15 09:20:26.948598] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:43.290 09:20:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:43.290 09:20:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:43.290 09:20:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:20:43.290 09:20:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:43.290 09:20:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:43.290 09:20:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:43.290 09:20:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.290 09:20:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.290 09:20:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.290 09:20:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:43.548 09:20:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.548 09:20:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:43.548 "name": "raid_bdev1", 00:20:43.548 "uuid": "9da71b9b-44bd-4889-a43f-c57400993c58", 00:20:43.548 "strip_size_kb": 0, 00:20:43.548 "state": "online", 00:20:43.549 "raid_level": "raid1", 00:20:43.549 "superblock": false, 00:20:43.549 "num_base_bdevs": 4, 00:20:43.549 "num_base_bdevs_discovered": 3, 00:20:43.549 "num_base_bdevs_operational": 3, 00:20:43.549 "process": { 00:20:43.549 "type": "rebuild", 00:20:43.549 "target": "spare", 00:20:43.549 "progress": { 00:20:43.549 "blocks": 32768, 00:20:43.549 "percent": 50 00:20:43.549 } 00:20:43.549 }, 00:20:43.549 "base_bdevs_list": [ 00:20:43.549 { 00:20:43.549 "name": "spare", 00:20:43.549 "uuid": "3bc18baf-d1dc-5b76-8565-8f5e9fabbb2d", 00:20:43.549 "is_configured": true, 00:20:43.549 "data_offset": 0, 00:20:43.549 "data_size": 65536 00:20:43.549 }, 00:20:43.549 { 00:20:43.549 "name": null, 00:20:43.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.549 "is_configured": false, 00:20:43.549 "data_offset": 0, 00:20:43.549 
"data_size": 65536 00:20:43.549 }, 00:20:43.549 { 00:20:43.549 "name": "BaseBdev3", 00:20:43.549 "uuid": "1b17ddd3-6944-513c-a45c-c606ec59bfe4", 00:20:43.549 "is_configured": true, 00:20:43.549 "data_offset": 0, 00:20:43.549 "data_size": 65536 00:20:43.549 }, 00:20:43.549 { 00:20:43.549 "name": "BaseBdev4", 00:20:43.549 "uuid": "abdf4390-edc1-507d-8a95-bc8684aa83e9", 00:20:43.549 "is_configured": true, 00:20:43.549 "data_offset": 0, 00:20:43.549 "data_size": 65536 00:20:43.549 } 00:20:43.549 ] 00:20:43.549 }' 00:20:43.549 09:20:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:43.549 09:20:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:43.549 09:20:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:43.549 09:20:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:43.549 09:20:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:43.807 [2024-10-15 09:20:27.677533] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:44.323 92.67 IOPS, 278.00 MiB/s [2024-10-15T09:20:28.251Z] [2024-10-15 09:20:28.122790] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:20:44.582 09:20:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:44.582 09:20:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:44.582 09:20:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:44.582 09:20:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:44.582 09:20:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:20:44.582 09:20:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:44.582 09:20:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.582 09:20:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.582 09:20:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:44.582 09:20:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.582 [2024-10-15 09:20:28.372242] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:20:44.582 09:20:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.582 09:20:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:44.582 "name": "raid_bdev1", 00:20:44.582 "uuid": "9da71b9b-44bd-4889-a43f-c57400993c58", 00:20:44.582 "strip_size_kb": 0, 00:20:44.582 "state": "online", 00:20:44.582 "raid_level": "raid1", 00:20:44.582 "superblock": false, 00:20:44.582 "num_base_bdevs": 4, 00:20:44.582 "num_base_bdevs_discovered": 3, 00:20:44.582 "num_base_bdevs_operational": 3, 00:20:44.582 "process": { 00:20:44.582 "type": "rebuild", 00:20:44.582 "target": "spare", 00:20:44.582 "progress": { 00:20:44.582 "blocks": 51200, 00:20:44.582 "percent": 78 00:20:44.582 } 00:20:44.582 }, 00:20:44.582 "base_bdevs_list": [ 00:20:44.582 { 00:20:44.582 "name": "spare", 00:20:44.582 "uuid": "3bc18baf-d1dc-5b76-8565-8f5e9fabbb2d", 00:20:44.582 "is_configured": true, 00:20:44.582 "data_offset": 0, 00:20:44.582 "data_size": 65536 00:20:44.582 }, 00:20:44.582 { 00:20:44.582 "name": null, 00:20:44.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.582 "is_configured": false, 00:20:44.582 "data_offset": 0, 00:20:44.582 "data_size": 65536 00:20:44.582 }, 00:20:44.582 { 00:20:44.582 
"name": "BaseBdev3", 00:20:44.582 "uuid": "1b17ddd3-6944-513c-a45c-c606ec59bfe4", 00:20:44.582 "is_configured": true, 00:20:44.582 "data_offset": 0, 00:20:44.582 "data_size": 65536 00:20:44.582 }, 00:20:44.582 { 00:20:44.582 "name": "BaseBdev4", 00:20:44.582 "uuid": "abdf4390-edc1-507d-8a95-bc8684aa83e9", 00:20:44.582 "is_configured": true, 00:20:44.582 "data_offset": 0, 00:20:44.582 "data_size": 65536 00:20:44.582 } 00:20:44.582 ] 00:20:44.582 }' 00:20:44.582 09:20:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:44.582 09:20:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:44.582 09:20:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:44.582 [2024-10-15 09:20:28.485510] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:20:44.842 09:20:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:44.842 09:20:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:45.359 85.86 IOPS, 257.57 MiB/s [2024-10-15T09:20:29.287Z] [2024-10-15 09:20:29.152115] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:45.359 [2024-10-15 09:20:29.248601] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:45.359 [2024-10-15 09:20:29.261516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.618 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:45.618 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:45.618 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:45.618 09:20:29 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:45.618 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:45.618 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:45.618 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.618 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.618 09:20:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.618 09:20:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:45.877 09:20:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.877 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:45.877 "name": "raid_bdev1", 00:20:45.877 "uuid": "9da71b9b-44bd-4889-a43f-c57400993c58", 00:20:45.877 "strip_size_kb": 0, 00:20:45.877 "state": "online", 00:20:45.877 "raid_level": "raid1", 00:20:45.877 "superblock": false, 00:20:45.877 "num_base_bdevs": 4, 00:20:45.877 "num_base_bdevs_discovered": 3, 00:20:45.877 "num_base_bdevs_operational": 3, 00:20:45.877 "base_bdevs_list": [ 00:20:45.877 { 00:20:45.877 "name": "spare", 00:20:45.877 "uuid": "3bc18baf-d1dc-5b76-8565-8f5e9fabbb2d", 00:20:45.877 "is_configured": true, 00:20:45.877 "data_offset": 0, 00:20:45.877 "data_size": 65536 00:20:45.877 }, 00:20:45.877 { 00:20:45.877 "name": null, 00:20:45.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.877 "is_configured": false, 00:20:45.877 "data_offset": 0, 00:20:45.877 "data_size": 65536 00:20:45.877 }, 00:20:45.877 { 00:20:45.877 "name": "BaseBdev3", 00:20:45.877 "uuid": "1b17ddd3-6944-513c-a45c-c606ec59bfe4", 00:20:45.877 "is_configured": true, 00:20:45.877 "data_offset": 0, 00:20:45.877 "data_size": 65536 00:20:45.877 }, 
00:20:45.877 { 00:20:45.877 "name": "BaseBdev4", 00:20:45.877 "uuid": "abdf4390-edc1-507d-8a95-bc8684aa83e9", 00:20:45.877 "is_configured": true, 00:20:45.877 "data_offset": 0, 00:20:45.877 "data_size": 65536 00:20:45.877 } 00:20:45.877 ] 00:20:45.877 }' 00:20:45.877 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:45.877 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:45.877 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:45.877 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:45.877 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:20:45.877 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:45.877 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:45.877 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:45.877 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:45.877 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:45.877 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.877 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.877 09:20:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.877 09:20:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:45.877 09:20:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.877 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:45.877 "name": "raid_bdev1", 00:20:45.877 "uuid": "9da71b9b-44bd-4889-a43f-c57400993c58", 00:20:45.877 "strip_size_kb": 0, 00:20:45.877 "state": "online", 00:20:45.877 "raid_level": "raid1", 00:20:45.877 "superblock": false, 00:20:45.877 "num_base_bdevs": 4, 00:20:45.877 "num_base_bdevs_discovered": 3, 00:20:45.877 "num_base_bdevs_operational": 3, 00:20:45.877 "base_bdevs_list": [ 00:20:45.877 { 00:20:45.877 "name": "spare", 00:20:45.877 "uuid": "3bc18baf-d1dc-5b76-8565-8f5e9fabbb2d", 00:20:45.877 "is_configured": true, 00:20:45.877 "data_offset": 0, 00:20:45.877 "data_size": 65536 00:20:45.877 }, 00:20:45.877 { 00:20:45.877 "name": null, 00:20:45.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.877 "is_configured": false, 00:20:45.877 "data_offset": 0, 00:20:45.877 "data_size": 65536 00:20:45.877 }, 00:20:45.877 { 00:20:45.877 "name": "BaseBdev3", 00:20:45.877 "uuid": "1b17ddd3-6944-513c-a45c-c606ec59bfe4", 00:20:45.877 "is_configured": true, 00:20:45.877 "data_offset": 0, 00:20:45.877 "data_size": 65536 00:20:45.877 }, 00:20:45.877 { 00:20:45.877 "name": "BaseBdev4", 00:20:45.877 "uuid": "abdf4390-edc1-507d-8a95-bc8684aa83e9", 00:20:45.877 "is_configured": true, 00:20:45.877 "data_offset": 0, 00:20:45.877 "data_size": 65536 00:20:45.877 } 00:20:45.877 ] 00:20:45.877 }' 00:20:45.877 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:45.877 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:45.877 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:46.136 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:46.136 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:46.136 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:20:46.136 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:46.136 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:46.136 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:46.136 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:46.136 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.136 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.136 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.136 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.136 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.136 09:20:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.136 09:20:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:46.136 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.136 09:20:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.136 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.136 "name": "raid_bdev1", 00:20:46.136 "uuid": "9da71b9b-44bd-4889-a43f-c57400993c58", 00:20:46.136 "strip_size_kb": 0, 00:20:46.136 "state": "online", 00:20:46.136 "raid_level": "raid1", 00:20:46.136 "superblock": false, 00:20:46.136 "num_base_bdevs": 4, 00:20:46.136 "num_base_bdevs_discovered": 3, 00:20:46.136 "num_base_bdevs_operational": 3, 00:20:46.136 "base_bdevs_list": [ 00:20:46.136 { 00:20:46.136 "name": "spare", 00:20:46.136 "uuid": 
"3bc18baf-d1dc-5b76-8565-8f5e9fabbb2d", 00:20:46.136 "is_configured": true, 00:20:46.136 "data_offset": 0, 00:20:46.136 "data_size": 65536 00:20:46.136 }, 00:20:46.136 { 00:20:46.136 "name": null, 00:20:46.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.136 "is_configured": false, 00:20:46.136 "data_offset": 0, 00:20:46.136 "data_size": 65536 00:20:46.136 }, 00:20:46.136 { 00:20:46.136 "name": "BaseBdev3", 00:20:46.136 "uuid": "1b17ddd3-6944-513c-a45c-c606ec59bfe4", 00:20:46.136 "is_configured": true, 00:20:46.136 "data_offset": 0, 00:20:46.136 "data_size": 65536 00:20:46.136 }, 00:20:46.136 { 00:20:46.136 "name": "BaseBdev4", 00:20:46.136 "uuid": "abdf4390-edc1-507d-8a95-bc8684aa83e9", 00:20:46.136 "is_configured": true, 00:20:46.136 "data_offset": 0, 00:20:46.136 "data_size": 65536 00:20:46.136 } 00:20:46.136 ] 00:20:46.136 }' 00:20:46.136 09:20:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.136 09:20:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:46.704 79.62 IOPS, 238.88 MiB/s [2024-10-15T09:20:30.632Z] 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:46.704 09:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.704 09:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:46.704 [2024-10-15 09:20:30.387300] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:46.704 [2024-10-15 09:20:30.387349] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:46.705 00:20:46.705 Latency(us) 00:20:46.705 [2024-10-15T09:20:30.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.705 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:46.705 raid_bdev1 : 8.55 77.10 231.31 0.00 0.00 17562.86 
307.20 126782.37 00:20:46.705 [2024-10-15T09:20:30.633Z] =================================================================================================================== 00:20:46.705 [2024-10-15T09:20:30.633Z] Total : 77.10 231.31 0.00 0.00 17562.86 307.20 126782.37 00:20:46.705 [2024-10-15 09:20:30.508863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:46.705 { 00:20:46.705 "results": [ 00:20:46.705 { 00:20:46.705 "job": "raid_bdev1", 00:20:46.705 "core_mask": "0x1", 00:20:46.705 "workload": "randrw", 00:20:46.705 "percentage": 50, 00:20:46.705 "status": "finished", 00:20:46.705 "queue_depth": 2, 00:20:46.705 "io_size": 3145728, 00:20:46.705 "runtime": 8.547108, 00:20:46.705 "iops": 77.10210283993135, 00:20:46.705 "mibps": 231.30630851979404, 00:20:46.705 "io_failed": 0, 00:20:46.705 "io_timeout": 0, 00:20:46.705 "avg_latency_us": 17562.86287212029, 00:20:46.705 "min_latency_us": 307.2, 00:20:46.705 "max_latency_us": 126782.37090909091 00:20:46.705 } 00:20:46.705 ], 00:20:46.705 "core_count": 1 00:20:46.705 } 00:20:46.705 [2024-10-15 09:20:30.509001] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:46.705 [2024-10-15 09:20:30.509251] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:46.705 [2024-10-15 09:20:30.509286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:46.705 09:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.705 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.705 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:20:46.705 09:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.705 09:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:20:46.705 09:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.705 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:46.705 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:46.705 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:20:46.705 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:20:46.705 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:46.705 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:46.705 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:46.705 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:46.705 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:46.705 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:46.705 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:46.705 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:46.705 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:20:47.272 /dev/nbd0 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:20:47.272 09:20:30 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:47.272 1+0 records in 00:20:47.272 1+0 records out 00:20:47.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406734 s, 10.1 MB/s 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@728 -- # continue 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:47.272 09:20:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:20:47.531 /dev/nbd1 00:20:47.531 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:47.531 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:47.531 09:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:20:47.531 09:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:20:47.531 09:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:47.531 09:20:31 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:47.531 09:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:20:47.531 09:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:20:47.531 09:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:47.531 09:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:47.531 09:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:47.531 1+0 records in 00:20:47.531 1+0 records out 00:20:47.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390391 s, 10.5 MB/s 00:20:47.531 09:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.531 09:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:20:47.531 09:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.531 09:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:47.531 09:20:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:20:47.531 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:47.531 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:47.531 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:47.790 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:20:47.790 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:47.790 09:20:31 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:47.790 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:47.790 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:47.790 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:47.790 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:48.048 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:48.048 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:48.048 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:48.048 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:48.048 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:48.048 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:48.048 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:48.048 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:48.048 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:20:48.048 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:20:48.048 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:20:48.048 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:48.048 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:20:48.048 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:20:48.048 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:48.048 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:48.048 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:48.048 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:48.048 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:48.048 09:20:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:20:48.307 /dev/nbd1 00:20:48.307 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:48.307 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:48.307 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:20:48.307 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:20:48.307 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:48.307 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:48.307 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:20:48.307 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:20:48.307 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:48.307 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:48.307 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:48.307 1+0 records in 00:20:48.307 1+0 records out 
00:20:48.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323056 s, 12.7 MB/s 00:20:48.307 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.307 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:20:48.307 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.307 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:48.307 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:20:48.307 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:48.307 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:48.307 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:48.565 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:20:48.565 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:48.565 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:48.565 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:48.565 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:48.565 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:48.565 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:48.823 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:48.823 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:20:48.823 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:48.823 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:48.823 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:48.823 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:48.823 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:48.824 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:48.824 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:48.824 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:48.824 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:48.824 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:48.824 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:48.824 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:48.824 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:49.082 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:49.082 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:49.082 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:49.082 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:49.082 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:49.082 09:20:32 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:49.082 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:49.082 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:49.082 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:49.082 09:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79355 00:20:49.082 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 79355 ']' 00:20:49.082 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 79355 00:20:49.082 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:20:49.082 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:49.082 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79355 00:20:49.082 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:49.082 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:49.082 killing process with pid 79355 00:20:49.082 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79355' 00:20:49.082 Received shutdown signal, test time was about 10.947018 seconds 00:20:49.082 00:20:49.082 Latency(us) 00:20:49.082 [2024-10-15T09:20:33.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.082 [2024-10-15T09:20:33.010Z] =================================================================================================================== 00:20:49.082 [2024-10-15T09:20:33.010Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:49.082 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 79355 00:20:49.082 [2024-10-15 
09:20:32.883133] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:49.082 09:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 79355 00:20:49.650 [2024-10-15 09:20:33.289762] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:50.584 09:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:20:50.584 00:20:50.584 real 0m14.741s 00:20:50.584 user 0m19.321s 00:20:50.584 sys 0m1.957s 00:20:50.584 09:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:50.584 ************************************ 00:20:50.584 END TEST raid_rebuild_test_io 00:20:50.584 ************************************ 00:20:50.584 09:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:50.843 09:20:34 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:20:50.843 09:20:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:20:50.843 09:20:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:50.843 09:20:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:50.843 ************************************ 00:20:50.843 START TEST raid_rebuild_test_sb_io 00:20:50.843 ************************************ 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@573 -- # local verify=true 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:50.843 09:20:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79782 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79782 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 79782 ']' 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:50.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:50.843 09:20:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:50.843 [2024-10-15 09:20:34.682186] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:20:50.843 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:50.843 Zero copy mechanism will not be used. 00:20:50.843 [2024-10-15 09:20:34.683175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79782 ] 00:20:51.102 [2024-10-15 09:20:34.862734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.102 [2024-10-15 09:20:35.011384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.360 [2024-10-15 09:20:35.237998] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:51.360 [2024-10-15 09:20:35.238048] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:51.927 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:51.927 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:20:51.927 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:51.927 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:51.927 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.927 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:51.927 BaseBdev1_malloc 00:20:51.927 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.927 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:51.927 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.927 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:51.927 [2024-10-15 09:20:35.835641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:51.927 [2024-10-15 09:20:35.835743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.927 [2024-10-15 09:20:35.835781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:51.927 [2024-10-15 09:20:35.835802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.927 [2024-10-15 09:20:35.838823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.927 [2024-10-15 09:20:35.838886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:51.927 BaseBdev1 00:20:51.927 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.927 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:51.928 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:51.928 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.928 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:52.187 BaseBdev2_malloc 00:20:52.187 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.187 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:20:52.187 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.187 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:52.187 [2024-10-15 09:20:35.893335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:52.187 [2024-10-15 09:20:35.893429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.187 [2024-10-15 09:20:35.893460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:52.187 [2024-10-15 09:20:35.893498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.187 [2024-10-15 09:20:35.896452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.187 [2024-10-15 09:20:35.896498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:52.187 BaseBdev2 00:20:52.187 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.187 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:52.187 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:52.187 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.187 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:52.187 BaseBdev3_malloc 00:20:52.187 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.187 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:52.187 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.187 09:20:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:52.187 [2024-10-15 09:20:35.963961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:52.187 [2024-10-15 09:20:35.964034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.187 [2024-10-15 09:20:35.964071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:52.187 [2024-10-15 09:20:35.964092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.187 [2024-10-15 09:20:35.967162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.187 [2024-10-15 09:20:35.967223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:52.187 BaseBdev3 00:20:52.187 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.187 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:52.187 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:52.187 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.187 09:20:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:52.187 BaseBdev4_malloc 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:52.187 [2024-10-15 09:20:36.020678] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:20:52.187 [2024-10-15 09:20:36.020760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.187 [2024-10-15 09:20:36.020794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:52.187 [2024-10-15 09:20:36.020816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.187 [2024-10-15 09:20:36.023916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.187 [2024-10-15 09:20:36.023967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:52.187 BaseBdev4 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:52.187 spare_malloc 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:52.187 spare_delay 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:52.187 [2024-10-15 09:20:36.085402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:52.187 [2024-10-15 09:20:36.085479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.187 [2024-10-15 09:20:36.085511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:52.187 [2024-10-15 09:20:36.085531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.187 [2024-10-15 09:20:36.088459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.187 [2024-10-15 09:20:36.088507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:52.187 spare 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:52.187 [2024-10-15 09:20:36.093478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:52.187 [2024-10-15 09:20:36.096054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:52.187 [2024-10-15 09:20:36.096196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:52.187 [2024-10-15 09:20:36.096283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:52.187 [2024-10-15 09:20:36.096543] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:20:52.187 [2024-10-15 09:20:36.096580] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:52.187 [2024-10-15 09:20:36.096907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:52.187 [2024-10-15 09:20:36.097174] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:52.187 [2024-10-15 09:20:36.097204] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:52.187 [2024-10-15 09:20:36.097452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:52.187 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.446 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.446 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:52.446 "name": "raid_bdev1", 00:20:52.446 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:20:52.446 "strip_size_kb": 0, 00:20:52.446 "state": "online", 00:20:52.446 "raid_level": "raid1", 00:20:52.446 "superblock": true, 00:20:52.446 "num_base_bdevs": 4, 00:20:52.446 "num_base_bdevs_discovered": 4, 00:20:52.446 "num_base_bdevs_operational": 4, 00:20:52.446 "base_bdevs_list": [ 00:20:52.446 { 00:20:52.446 "name": "BaseBdev1", 00:20:52.446 "uuid": "ef05cc67-09dc-5ee4-9b78-fadee4d77521", 00:20:52.446 "is_configured": true, 00:20:52.446 "data_offset": 2048, 00:20:52.446 "data_size": 63488 00:20:52.446 }, 00:20:52.446 { 00:20:52.446 "name": "BaseBdev2", 00:20:52.446 "uuid": "d0d87bd0-46b3-5190-95ac-11d12112e554", 00:20:52.446 "is_configured": true, 00:20:52.446 "data_offset": 2048, 00:20:52.446 "data_size": 63488 00:20:52.446 }, 00:20:52.446 { 00:20:52.446 "name": "BaseBdev3", 00:20:52.446 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:20:52.446 "is_configured": true, 00:20:52.446 "data_offset": 2048, 00:20:52.446 "data_size": 63488 00:20:52.446 }, 00:20:52.446 { 00:20:52.446 "name": "BaseBdev4", 00:20:52.446 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:20:52.446 "is_configured": true, 00:20:52.446 "data_offset": 2048, 00:20:52.446 "data_size": 63488 00:20:52.447 } 00:20:52.447 ] 00:20:52.447 }' 00:20:52.447 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:20:52.447 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:52.705 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:52.705 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:52.705 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.705 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:52.705 [2024-10-15 09:20:36.618113] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:52.964 09:20:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:52.964 [2024-10-15 09:20:36.725692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:52.964 "name": "raid_bdev1", 00:20:52.964 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:20:52.964 "strip_size_kb": 0, 00:20:52.964 "state": "online", 00:20:52.964 "raid_level": "raid1", 00:20:52.964 "superblock": true, 00:20:52.964 "num_base_bdevs": 4, 00:20:52.964 "num_base_bdevs_discovered": 3, 00:20:52.964 "num_base_bdevs_operational": 3, 00:20:52.964 "base_bdevs_list": [ 00:20:52.964 { 00:20:52.964 "name": null, 00:20:52.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.964 "is_configured": false, 00:20:52.964 "data_offset": 0, 00:20:52.964 "data_size": 63488 00:20:52.964 }, 00:20:52.964 { 00:20:52.964 "name": "BaseBdev2", 00:20:52.964 "uuid": "d0d87bd0-46b3-5190-95ac-11d12112e554", 00:20:52.964 "is_configured": true, 00:20:52.964 "data_offset": 2048, 00:20:52.964 "data_size": 63488 00:20:52.964 }, 00:20:52.964 { 00:20:52.964 "name": "BaseBdev3", 00:20:52.964 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:20:52.964 "is_configured": true, 00:20:52.964 "data_offset": 2048, 00:20:52.964 "data_size": 63488 00:20:52.964 }, 00:20:52.964 { 00:20:52.964 "name": "BaseBdev4", 00:20:52.964 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:20:52.964 "is_configured": true, 00:20:52.964 "data_offset": 2048, 00:20:52.964 "data_size": 63488 00:20:52.964 } 00:20:52.964 ] 00:20:52.964 }' 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:52.964 09:20:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:52.964 [2024-10-15 09:20:36.854837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:52.964 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:52.964 Zero copy mechanism will not be used. 
00:20:52.964 Running I/O for 60 seconds... 00:20:53.535 09:20:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:53.535 09:20:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.535 09:20:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:53.535 [2024-10-15 09:20:37.265844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:53.535 09:20:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.535 09:20:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:53.535 [2024-10-15 09:20:37.343428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:20:53.535 [2024-10-15 09:20:37.346468] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:53.535 [2024-10-15 09:20:37.460520] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:53.794 [2024-10-15 09:20:37.462872] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:53.794 [2024-10-15 09:20:37.689234] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:53.794 [2024-10-15 09:20:37.690386] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:54.312 131.00 IOPS, 393.00 MiB/s [2024-10-15T09:20:38.240Z] [2024-10-15 09:20:38.135606] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:54.312 [2024-10-15 09:20:38.136744] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:54.571 
09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:54.571 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:54.571 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:54.571 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:54.571 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:54.571 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.571 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.571 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.571 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:54.571 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.571 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:54.571 "name": "raid_bdev1", 00:20:54.571 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:20:54.571 "strip_size_kb": 0, 00:20:54.571 "state": "online", 00:20:54.571 "raid_level": "raid1", 00:20:54.571 "superblock": true, 00:20:54.571 "num_base_bdevs": 4, 00:20:54.571 "num_base_bdevs_discovered": 4, 00:20:54.571 "num_base_bdevs_operational": 4, 00:20:54.571 "process": { 00:20:54.571 "type": "rebuild", 00:20:54.571 "target": "spare", 00:20:54.571 "progress": { 00:20:54.571 "blocks": 10240, 00:20:54.571 "percent": 16 00:20:54.571 } 00:20:54.571 }, 00:20:54.571 "base_bdevs_list": [ 00:20:54.571 { 00:20:54.571 "name": "spare", 00:20:54.571 "uuid": "cebbc231-35e1-570a-a230-53c9942a033f", 00:20:54.571 "is_configured": true, 00:20:54.571 "data_offset": 
2048, 00:20:54.571 "data_size": 63488 00:20:54.571 }, 00:20:54.571 { 00:20:54.571 "name": "BaseBdev2", 00:20:54.571 "uuid": "d0d87bd0-46b3-5190-95ac-11d12112e554", 00:20:54.571 "is_configured": true, 00:20:54.571 "data_offset": 2048, 00:20:54.571 "data_size": 63488 00:20:54.571 }, 00:20:54.571 { 00:20:54.571 "name": "BaseBdev3", 00:20:54.571 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:20:54.571 "is_configured": true, 00:20:54.571 "data_offset": 2048, 00:20:54.571 "data_size": 63488 00:20:54.571 }, 00:20:54.571 { 00:20:54.571 "name": "BaseBdev4", 00:20:54.571 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:20:54.571 "is_configured": true, 00:20:54.571 "data_offset": 2048, 00:20:54.571 "data_size": 63488 00:20:54.571 } 00:20:54.571 ] 00:20:54.571 }' 00:20:54.571 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:54.571 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:54.571 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:54.571 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:54.571 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:54.571 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.571 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:54.571 [2024-10-15 09:20:38.484499] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:54.571 [2024-10-15 09:20:38.484697] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:54.830 [2024-10-15 09:20:38.593103] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:20:54.830 [2024-10-15 09:20:38.608846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.830 [2024-10-15 09:20:38.608982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:54.830 [2024-10-15 09:20:38.609007] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:54.830 [2024-10-15 09:20:38.626407] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:20:54.830 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.830 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:54.830 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:54.830 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:54.830 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:54.830 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:54.830 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:54.830 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.830 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.830 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.830 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.830 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.830 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.830 09:20:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:54.830 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.830 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.830 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.830 "name": "raid_bdev1", 00:20:54.830 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:20:54.830 "strip_size_kb": 0, 00:20:54.830 "state": "online", 00:20:54.830 "raid_level": "raid1", 00:20:54.830 "superblock": true, 00:20:54.830 "num_base_bdevs": 4, 00:20:54.830 "num_base_bdevs_discovered": 3, 00:20:54.830 "num_base_bdevs_operational": 3, 00:20:54.830 "base_bdevs_list": [ 00:20:54.830 { 00:20:54.830 "name": null, 00:20:54.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.830 "is_configured": false, 00:20:54.830 "data_offset": 0, 00:20:54.830 "data_size": 63488 00:20:54.830 }, 00:20:54.830 { 00:20:54.830 "name": "BaseBdev2", 00:20:54.830 "uuid": "d0d87bd0-46b3-5190-95ac-11d12112e554", 00:20:54.830 "is_configured": true, 00:20:54.830 "data_offset": 2048, 00:20:54.830 "data_size": 63488 00:20:54.830 }, 00:20:54.830 { 00:20:54.830 "name": "BaseBdev3", 00:20:54.830 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:20:54.830 "is_configured": true, 00:20:54.830 "data_offset": 2048, 00:20:54.830 "data_size": 63488 00:20:54.830 }, 00:20:54.830 { 00:20:54.830 "name": "BaseBdev4", 00:20:54.830 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:20:54.830 "is_configured": true, 00:20:54.830 "data_offset": 2048, 00:20:54.830 "data_size": 63488 00:20:54.830 } 00:20:54.830 ] 00:20:54.830 }' 00:20:54.830 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.830 09:20:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:55.348 110.00 IOPS, 330.00 MiB/s 
[2024-10-15T09:20:39.277Z] 09:20:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:55.349 09:20:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:55.349 09:20:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:55.349 09:20:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:55.349 09:20:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:55.349 09:20:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.349 09:20:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.349 09:20:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:55.349 09:20:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.349 09:20:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.349 09:20:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:55.349 "name": "raid_bdev1", 00:20:55.349 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:20:55.349 "strip_size_kb": 0, 00:20:55.349 "state": "online", 00:20:55.349 "raid_level": "raid1", 00:20:55.349 "superblock": true, 00:20:55.349 "num_base_bdevs": 4, 00:20:55.349 "num_base_bdevs_discovered": 3, 00:20:55.349 "num_base_bdevs_operational": 3, 00:20:55.349 "base_bdevs_list": [ 00:20:55.349 { 00:20:55.349 "name": null, 00:20:55.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.349 "is_configured": false, 00:20:55.349 "data_offset": 0, 00:20:55.349 "data_size": 63488 00:20:55.349 }, 00:20:55.349 { 00:20:55.349 "name": "BaseBdev2", 00:20:55.349 "uuid": "d0d87bd0-46b3-5190-95ac-11d12112e554", 00:20:55.349 
"is_configured": true, 00:20:55.349 "data_offset": 2048, 00:20:55.349 "data_size": 63488 00:20:55.349 }, 00:20:55.349 { 00:20:55.349 "name": "BaseBdev3", 00:20:55.349 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:20:55.349 "is_configured": true, 00:20:55.349 "data_offset": 2048, 00:20:55.349 "data_size": 63488 00:20:55.349 }, 00:20:55.349 { 00:20:55.349 "name": "BaseBdev4", 00:20:55.349 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:20:55.349 "is_configured": true, 00:20:55.349 "data_offset": 2048, 00:20:55.349 "data_size": 63488 00:20:55.349 } 00:20:55.349 ] 00:20:55.349 }' 00:20:55.349 09:20:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:55.608 09:20:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:55.608 09:20:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:55.608 09:20:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:55.608 09:20:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:55.608 09:20:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.608 09:20:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:55.608 [2024-10-15 09:20:39.359624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:55.608 09:20:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.608 09:20:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:55.608 [2024-10-15 09:20:39.463331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:20:55.608 [2024-10-15 09:20:39.466515] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:55.867 
[2024-10-15 09:20:39.608186] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:55.867 [2024-10-15 09:20:39.610438] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:56.384 119.67 IOPS, 359.00 MiB/s [2024-10-15T09:20:40.312Z] [2024-10-15 09:20:40.117435] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:56.384 [2024-10-15 09:20:40.118327] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:56.708 [2024-10-15 09:20:40.341222] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:56.708 [2024-10-15 09:20:40.341698] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@10 -- # set +x 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:56.708 "name": "raid_bdev1", 00:20:56.708 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:20:56.708 "strip_size_kb": 0, 00:20:56.708 "state": "online", 00:20:56.708 "raid_level": "raid1", 00:20:56.708 "superblock": true, 00:20:56.708 "num_base_bdevs": 4, 00:20:56.708 "num_base_bdevs_discovered": 4, 00:20:56.708 "num_base_bdevs_operational": 4, 00:20:56.708 "process": { 00:20:56.708 "type": "rebuild", 00:20:56.708 "target": "spare", 00:20:56.708 "progress": { 00:20:56.708 "blocks": 10240, 00:20:56.708 "percent": 16 00:20:56.708 } 00:20:56.708 }, 00:20:56.708 "base_bdevs_list": [ 00:20:56.708 { 00:20:56.708 "name": "spare", 00:20:56.708 "uuid": "cebbc231-35e1-570a-a230-53c9942a033f", 00:20:56.708 "is_configured": true, 00:20:56.708 "data_offset": 2048, 00:20:56.708 "data_size": 63488 00:20:56.708 }, 00:20:56.708 { 00:20:56.708 "name": "BaseBdev2", 00:20:56.708 "uuid": "d0d87bd0-46b3-5190-95ac-11d12112e554", 00:20:56.708 "is_configured": true, 00:20:56.708 "data_offset": 2048, 00:20:56.708 "data_size": 63488 00:20:56.708 }, 00:20:56.708 { 00:20:56.708 "name": "BaseBdev3", 00:20:56.708 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:20:56.708 "is_configured": true, 00:20:56.708 "data_offset": 2048, 00:20:56.708 "data_size": 63488 00:20:56.708 }, 00:20:56.708 { 00:20:56.708 "name": "BaseBdev4", 00:20:56.708 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:20:56.708 "is_configured": true, 00:20:56.708 "data_offset": 2048, 00:20:56.708 "data_size": 63488 00:20:56.708 } 00:20:56.708 ] 00:20:56.708 }' 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:56.708 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.708 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:56.708 [2024-10-15 09:20:40.564790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:56.966 [2024-10-15 09:20:40.686485] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:57.225 102.00 IOPS, 306.00 MiB/s [2024-10-15T09:20:41.153Z] [2024-10-15 09:20:40.907926] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:20:57.225 [2024-10-15 09:20:40.908004] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:20:57.225 [2024-10-15 09:20:40.914744] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 
18432 00:20:57.225 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.225 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:20:57.225 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:20:57.225 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.225 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.225 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:57.225 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:57.225 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.225 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.225 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.225 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:57.225 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.225 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.225 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.225 "name": "raid_bdev1", 00:20:57.225 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:20:57.225 "strip_size_kb": 0, 00:20:57.225 "state": "online", 00:20:57.225 "raid_level": "raid1", 00:20:57.225 "superblock": true, 00:20:57.225 "num_base_bdevs": 4, 00:20:57.225 "num_base_bdevs_discovered": 3, 00:20:57.225 "num_base_bdevs_operational": 3, 00:20:57.225 "process": { 00:20:57.225 "type": "rebuild", 
00:20:57.225 "target": "spare", 00:20:57.225 "progress": { 00:20:57.225 "blocks": 14336, 00:20:57.225 "percent": 22 00:20:57.225 } 00:20:57.225 }, 00:20:57.225 "base_bdevs_list": [ 00:20:57.225 { 00:20:57.225 "name": "spare", 00:20:57.225 "uuid": "cebbc231-35e1-570a-a230-53c9942a033f", 00:20:57.225 "is_configured": true, 00:20:57.225 "data_offset": 2048, 00:20:57.225 "data_size": 63488 00:20:57.225 }, 00:20:57.225 { 00:20:57.225 "name": null, 00:20:57.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.225 "is_configured": false, 00:20:57.225 "data_offset": 0, 00:20:57.225 "data_size": 63488 00:20:57.225 }, 00:20:57.225 { 00:20:57.225 "name": "BaseBdev3", 00:20:57.225 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:20:57.225 "is_configured": true, 00:20:57.225 "data_offset": 2048, 00:20:57.225 "data_size": 63488 00:20:57.225 }, 00:20:57.225 { 00:20:57.225 "name": "BaseBdev4", 00:20:57.226 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:20:57.226 "is_configured": true, 00:20:57.226 "data_offset": 2048, 00:20:57.226 "data_size": 63488 00:20:57.226 } 00:20:57.226 ] 00:20:57.226 }' 00:20:57.226 09:20:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.226 09:20:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.226 09:20:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.226 [2024-10-15 09:20:41.054461] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:57.226 09:20:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.226 09:20:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=552 00:20:57.226 09:20:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:57.226 09:20:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.226 09:20:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.226 09:20:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:57.226 09:20:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:57.226 09:20:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.226 09:20:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.226 09:20:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.226 09:20:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:57.226 09:20:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.226 09:20:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.226 09:20:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.226 "name": "raid_bdev1", 00:20:57.226 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:20:57.226 "strip_size_kb": 0, 00:20:57.226 "state": "online", 00:20:57.226 "raid_level": "raid1", 00:20:57.226 "superblock": true, 00:20:57.226 "num_base_bdevs": 4, 00:20:57.226 "num_base_bdevs_discovered": 3, 00:20:57.226 "num_base_bdevs_operational": 3, 00:20:57.226 "process": { 00:20:57.226 "type": "rebuild", 00:20:57.226 "target": "spare", 00:20:57.226 "progress": { 00:20:57.226 "blocks": 16384, 00:20:57.226 "percent": 25 00:20:57.226 } 00:20:57.226 }, 00:20:57.226 "base_bdevs_list": [ 00:20:57.226 { 00:20:57.226 "name": "spare", 00:20:57.226 "uuid": "cebbc231-35e1-570a-a230-53c9942a033f", 00:20:57.226 "is_configured": true, 00:20:57.226 "data_offset": 2048, 
00:20:57.226 "data_size": 63488 00:20:57.226 }, 00:20:57.226 { 00:20:57.226 "name": null, 00:20:57.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.226 "is_configured": false, 00:20:57.226 "data_offset": 0, 00:20:57.226 "data_size": 63488 00:20:57.226 }, 00:20:57.226 { 00:20:57.226 "name": "BaseBdev3", 00:20:57.226 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:20:57.226 "is_configured": true, 00:20:57.226 "data_offset": 2048, 00:20:57.226 "data_size": 63488 00:20:57.226 }, 00:20:57.226 { 00:20:57.226 "name": "BaseBdev4", 00:20:57.226 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:20:57.226 "is_configured": true, 00:20:57.226 "data_offset": 2048, 00:20:57.226 "data_size": 63488 00:20:57.226 } 00:20:57.226 ] 00:20:57.226 }' 00:20:57.226 09:20:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.484 09:20:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.484 09:20:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.484 09:20:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.484 09:20:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:57.484 [2024-10-15 09:20:41.408376] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:57.484 [2024-10-15 09:20:41.409252] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:58.052 95.00 IOPS, 285.00 MiB/s [2024-10-15T09:20:41.980Z] [2024-10-15 09:20:41.887925] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:58.311 [2024-10-15 09:20:42.226919] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 
30720 offset_end: 36864 00:20:58.570 09:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:58.570 09:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:58.570 09:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:58.570 09:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:58.570 09:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:58.570 09:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:58.570 09:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.570 09:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.570 09:20:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.571 09:20:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:58.571 09:20:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.571 09:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:58.571 "name": "raid_bdev1", 00:20:58.571 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:20:58.571 "strip_size_kb": 0, 00:20:58.571 "state": "online", 00:20:58.571 "raid_level": "raid1", 00:20:58.571 "superblock": true, 00:20:58.571 "num_base_bdevs": 4, 00:20:58.571 "num_base_bdevs_discovered": 3, 00:20:58.571 "num_base_bdevs_operational": 3, 00:20:58.571 "process": { 00:20:58.571 "type": "rebuild", 00:20:58.571 "target": "spare", 00:20:58.571 "progress": { 00:20:58.571 "blocks": 32768, 00:20:58.571 "percent": 51 00:20:58.571 } 00:20:58.571 }, 00:20:58.571 "base_bdevs_list": [ 00:20:58.571 { 
00:20:58.571 "name": "spare", 00:20:58.571 "uuid": "cebbc231-35e1-570a-a230-53c9942a033f", 00:20:58.571 "is_configured": true, 00:20:58.571 "data_offset": 2048, 00:20:58.571 "data_size": 63488 00:20:58.571 }, 00:20:58.571 { 00:20:58.571 "name": null, 00:20:58.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.571 "is_configured": false, 00:20:58.571 "data_offset": 0, 00:20:58.571 "data_size": 63488 00:20:58.571 }, 00:20:58.571 { 00:20:58.571 "name": "BaseBdev3", 00:20:58.571 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:20:58.571 "is_configured": true, 00:20:58.571 "data_offset": 2048, 00:20:58.571 "data_size": 63488 00:20:58.571 }, 00:20:58.571 { 00:20:58.571 "name": "BaseBdev4", 00:20:58.571 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:20:58.571 "is_configured": true, 00:20:58.571 "data_offset": 2048, 00:20:58.571 "data_size": 63488 00:20:58.571 } 00:20:58.571 ] 00:20:58.571 }' 00:20:58.571 09:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:58.571 09:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:58.571 09:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:58.571 [2024-10-15 09:20:42.359188] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:20:58.571 09:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:58.571 09:20:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:59.145 [2024-10-15 09:20:42.791073] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:59.145 87.17 IOPS, 261.50 MiB/s [2024-10-15T09:20:43.073Z] [2024-10-15 09:20:43.015182] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 
43008 offset_end: 49152 00:20:59.404 [2024-10-15 09:20:43.221817] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:20:59.662 09:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:59.662 09:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.662 09:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:59.662 09:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:59.662 09:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:59.662 09:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:59.662 09:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.662 09:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.662 09:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:59.662 09:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.662 09:20:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.662 [2024-10-15 09:20:43.447879] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:20:59.662 09:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:59.662 "name": "raid_bdev1", 00:20:59.662 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:20:59.662 "strip_size_kb": 0, 00:20:59.662 "state": "online", 00:20:59.662 "raid_level": "raid1", 00:20:59.662 "superblock": true, 00:20:59.662 "num_base_bdevs": 4, 00:20:59.662 
"num_base_bdevs_discovered": 3, 00:20:59.662 "num_base_bdevs_operational": 3, 00:20:59.662 "process": { 00:20:59.662 "type": "rebuild", 00:20:59.662 "target": "spare", 00:20:59.662 "progress": { 00:20:59.662 "blocks": 49152, 00:20:59.662 "percent": 77 00:20:59.662 } 00:20:59.662 }, 00:20:59.662 "base_bdevs_list": [ 00:20:59.662 { 00:20:59.662 "name": "spare", 00:20:59.662 "uuid": "cebbc231-35e1-570a-a230-53c9942a033f", 00:20:59.662 "is_configured": true, 00:20:59.662 "data_offset": 2048, 00:20:59.662 "data_size": 63488 00:20:59.662 }, 00:20:59.662 { 00:20:59.662 "name": null, 00:20:59.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.662 "is_configured": false, 00:20:59.662 "data_offset": 0, 00:20:59.662 "data_size": 63488 00:20:59.662 }, 00:20:59.662 { 00:20:59.662 "name": "BaseBdev3", 00:20:59.662 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:20:59.662 "is_configured": true, 00:20:59.662 "data_offset": 2048, 00:20:59.662 "data_size": 63488 00:20:59.662 }, 00:20:59.662 { 00:20:59.662 "name": "BaseBdev4", 00:20:59.662 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:20:59.662 "is_configured": true, 00:20:59.662 "data_offset": 2048, 00:20:59.662 "data_size": 63488 00:20:59.662 } 00:20:59.662 ] 00:20:59.662 }' 00:20:59.662 09:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:59.662 09:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:59.662 09:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:59.662 09:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:59.662 09:20:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:59.662 [2024-10-15 09:20:43.571618] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:00.488 79.57 IOPS, 
238.71 MiB/s [2024-10-15T09:20:44.416Z] [2024-10-15 09:20:44.272692] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:00.488 [2024-10-15 09:20:44.372791] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:00.488 [2024-10-15 09:20:44.377134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.746 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:00.746 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:00.746 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:00.746 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:00.746 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:00.746 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:00.746 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.746 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.746 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:00.746 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.746 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.746 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:00.746 "name": "raid_bdev1", 00:21:00.746 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:21:00.746 "strip_size_kb": 0, 00:21:00.746 "state": "online", 00:21:00.746 "raid_level": "raid1", 00:21:00.746 "superblock": true, 
00:21:00.746 "num_base_bdevs": 4, 00:21:00.746 "num_base_bdevs_discovered": 3, 00:21:00.746 "num_base_bdevs_operational": 3, 00:21:00.746 "base_bdevs_list": [ 00:21:00.746 { 00:21:00.746 "name": "spare", 00:21:00.746 "uuid": "cebbc231-35e1-570a-a230-53c9942a033f", 00:21:00.746 "is_configured": true, 00:21:00.746 "data_offset": 2048, 00:21:00.746 "data_size": 63488 00:21:00.746 }, 00:21:00.746 { 00:21:00.746 "name": null, 00:21:00.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.746 "is_configured": false, 00:21:00.746 "data_offset": 0, 00:21:00.746 "data_size": 63488 00:21:00.746 }, 00:21:00.746 { 00:21:00.746 "name": "BaseBdev3", 00:21:00.746 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:21:00.746 "is_configured": true, 00:21:00.746 "data_offset": 2048, 00:21:00.746 "data_size": 63488 00:21:00.746 }, 00:21:00.746 { 00:21:00.746 "name": "BaseBdev4", 00:21:00.746 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:21:00.746 "is_configured": true, 00:21:00.746 "data_offset": 2048, 00:21:00.746 "data_size": 63488 00:21:00.746 } 00:21:00.746 ] 00:21:00.746 }' 00:21:00.746 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:01.005 "name": "raid_bdev1", 00:21:01.005 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:21:01.005 "strip_size_kb": 0, 00:21:01.005 "state": "online", 00:21:01.005 "raid_level": "raid1", 00:21:01.005 "superblock": true, 00:21:01.005 "num_base_bdevs": 4, 00:21:01.005 "num_base_bdevs_discovered": 3, 00:21:01.005 "num_base_bdevs_operational": 3, 00:21:01.005 "base_bdevs_list": [ 00:21:01.005 { 00:21:01.005 "name": "spare", 00:21:01.005 "uuid": "cebbc231-35e1-570a-a230-53c9942a033f", 00:21:01.005 "is_configured": true, 00:21:01.005 "data_offset": 2048, 00:21:01.005 "data_size": 63488 00:21:01.005 }, 00:21:01.005 { 00:21:01.005 "name": null, 00:21:01.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.005 "is_configured": false, 00:21:01.005 "data_offset": 0, 00:21:01.005 "data_size": 63488 00:21:01.005 }, 00:21:01.005 { 00:21:01.005 "name": "BaseBdev3", 00:21:01.005 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:21:01.005 "is_configured": true, 00:21:01.005 "data_offset": 2048, 00:21:01.005 "data_size": 63488 00:21:01.005 }, 00:21:01.005 { 00:21:01.005 "name": 
"BaseBdev4", 00:21:01.005 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:21:01.005 "is_configured": true, 00:21:01.005 "data_offset": 2048, 00:21:01.005 "data_size": 63488 00:21:01.005 } 00:21:01.005 ] 00:21:01.005 }' 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:01.005 74.25 IOPS, 222.75 MiB/s [2024-10-15T09:20:44.933Z] 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:01.005 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.006 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.006 
09:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.006 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.006 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:01.006 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.264 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.264 "name": "raid_bdev1", 00:21:01.264 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:21:01.264 "strip_size_kb": 0, 00:21:01.264 "state": "online", 00:21:01.264 "raid_level": "raid1", 00:21:01.264 "superblock": true, 00:21:01.264 "num_base_bdevs": 4, 00:21:01.264 "num_base_bdevs_discovered": 3, 00:21:01.264 "num_base_bdevs_operational": 3, 00:21:01.264 "base_bdevs_list": [ 00:21:01.264 { 00:21:01.264 "name": "spare", 00:21:01.264 "uuid": "cebbc231-35e1-570a-a230-53c9942a033f", 00:21:01.264 "is_configured": true, 00:21:01.264 "data_offset": 2048, 00:21:01.264 "data_size": 63488 00:21:01.264 }, 00:21:01.264 { 00:21:01.264 "name": null, 00:21:01.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.264 "is_configured": false, 00:21:01.264 "data_offset": 0, 00:21:01.264 "data_size": 63488 00:21:01.264 }, 00:21:01.264 { 00:21:01.264 "name": "BaseBdev3", 00:21:01.264 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:21:01.264 "is_configured": true, 00:21:01.264 "data_offset": 2048, 00:21:01.264 "data_size": 63488 00:21:01.264 }, 00:21:01.264 { 00:21:01.264 "name": "BaseBdev4", 00:21:01.264 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:21:01.264 "is_configured": true, 00:21:01.264 "data_offset": 2048, 00:21:01.264 "data_size": 63488 00:21:01.264 } 00:21:01.264 ] 00:21:01.264 }' 00:21:01.264 09:20:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.264 09:20:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:01.522 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:01.522 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.522 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:01.522 [2024-10-15 09:20:45.402336] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:01.522 [2024-10-15 09:20:45.402382] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:01.781 00:21:01.781 Latency(us) 00:21:01.781 [2024-10-15T09:20:45.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.781 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:01.781 raid_bdev1 : 8.64 71.41 214.23 0.00 0.00 17336.78 320.23 118203.11 00:21:01.781 [2024-10-15T09:20:45.709Z] =================================================================================================================== 00:21:01.781 [2024-10-15T09:20:45.709Z] Total : 71.41 214.23 0.00 0.00 17336.78 320.23 118203.11 00:21:01.781 [2024-10-15 09:20:45.519105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:01.781 [2024-10-15 09:20:45.519235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:01.781 [2024-10-15 09:20:45.519379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:01.781 [2024-10-15 09:20:45.519403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:01.781 { 00:21:01.781 "results": [ 00:21:01.781 { 00:21:01.781 "job": "raid_bdev1", 00:21:01.781 "core_mask": "0x1", 00:21:01.781 "workload": "randrw", 00:21:01.781 "percentage": 50, 00:21:01.781 "status": 
"finished", 00:21:01.781 "queue_depth": 2, 00:21:01.781 "io_size": 3145728, 00:21:01.781 "runtime": 8.640434, 00:21:01.781 "iops": 71.40845008479899, 00:21:01.781 "mibps": 214.22535025439697, 00:21:01.781 "io_failed": 0, 00:21:01.781 "io_timeout": 0, 00:21:01.781 "avg_latency_us": 17336.778665095037, 00:21:01.781 "min_latency_us": 320.2327272727273, 00:21:01.781 "max_latency_us": 118203.11272727273 00:21:01.781 } 00:21:01.781 ], 00:21:01.781 "core_count": 1 00:21:01.781 } 00:21:01.781 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.781 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.781 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:21:01.781 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.781 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:01.781 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.781 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:01.781 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:01.781 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:21:01.781 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:21:01.781 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:01.781 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:01.781 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:01.781 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 
00:21:01.781 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:01.781 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:01.781 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:01.781 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:01.781 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:21:02.039 /dev/nbd0 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:02.039 1+0 records in 00:21:02.039 1+0 records out 00:21:02.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338957 s, 12.1 MB/s 
00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:02.039 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:21:02.040 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:21:02.040 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:02.040 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:21:02.040 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:02.040 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:02.040 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:21:02.040 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:02.040 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:02.040 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:02.040 09:20:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:02.298 /dev/nbd1 00:21:02.298 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:02.298 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:02.298 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:21:02.298 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:21:02.298 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:02.298 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:02.298 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:21:02.298 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:21:02.298 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:02.298 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:02.298 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:02.298 1+0 records in 00:21:02.298 1+0 records out 00:21:02.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310265 s, 13.2 MB/s 00:21:02.298 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:02.556 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:21:02.556 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:02.556 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:02.556 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:21:02.556 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:02.556 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:02.556 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:02.556 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:21:02.556 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:02.556 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:02.556 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:02.556 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:02.556 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:02.556 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:02.815 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:02.815 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:02.815 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:02.815 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:02.815 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:02.815 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:02.815 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:02.815 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:02.815 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:02.815 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:21:02.815 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:21:02.815 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:02.815 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:21:02.815 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:02.815 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:02.815 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:02.815 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:02.815 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:02.815 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:02.815 09:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:03.073 /dev/nbd1 00:21:03.332 09:20:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:03.332 1+0 records in 00:21:03.332 1+0 records out 00:21:03.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404259 s, 10.1 MB/s 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 
00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:03.332 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:03.590 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:03.590 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:03.590 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:03.590 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:03.590 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:03.590 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:03.590 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:03.590 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:03.590 
09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:03.590 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:03.590 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:03.590 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:03.590 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:03.590 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:03.590 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:03.848 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:03.848 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:03.848 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:03.848 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:03.848 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:03.848 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:03.848 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:03.848 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:03.848 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:03.848 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:03.848 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:03.848 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:03.848 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.848 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:03.848 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.848 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:03.848 [2024-10-15 09:20:47.723996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:03.848 [2024-10-15 09:20:47.724095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.848 [2024-10-15 09:20:47.724179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:21:03.848 [2024-10-15 09:20:47.724238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.848 [2024-10-15 09:20:47.727796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.848 [2024-10-15 09:20:47.727855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:03.848 [2024-10-15 09:20:47.728070] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:03.849 [2024-10-15 09:20:47.728213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:03.849 [2024-10-15 09:20:47.728557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:03.849 [2024-10-15 09:20:47.728797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:03.849 spare 00:21:03.849 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.849 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # 
rpc_cmd bdev_wait_for_examine 00:21:03.849 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.849 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:04.107 [2024-10-15 09:20:47.829015] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:04.107 [2024-10-15 09:20:47.829150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:04.107 [2024-10-15 09:20:47.829784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:21:04.107 [2024-10-15 09:20:47.830224] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:04.107 [2024-10-15 09:20:47.830254] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:04.107 [2024-10-15 09:20:47.830639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.107 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.107 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:04.107 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:04.107 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:04.107 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:04.107 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:04.107 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:04.107 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.107 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.107 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.107 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.107 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.107 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.107 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:04.107 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.107 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.107 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.107 "name": "raid_bdev1", 00:21:04.107 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:21:04.107 "strip_size_kb": 0, 00:21:04.107 "state": "online", 00:21:04.107 "raid_level": "raid1", 00:21:04.107 "superblock": true, 00:21:04.107 "num_base_bdevs": 4, 00:21:04.107 "num_base_bdevs_discovered": 3, 00:21:04.107 "num_base_bdevs_operational": 3, 00:21:04.107 "base_bdevs_list": [ 00:21:04.107 { 00:21:04.107 "name": "spare", 00:21:04.107 "uuid": "cebbc231-35e1-570a-a230-53c9942a033f", 00:21:04.107 "is_configured": true, 00:21:04.107 "data_offset": 2048, 00:21:04.107 "data_size": 63488 00:21:04.107 }, 00:21:04.107 { 00:21:04.107 "name": null, 00:21:04.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.107 "is_configured": false, 00:21:04.107 "data_offset": 2048, 00:21:04.107 "data_size": 63488 00:21:04.107 }, 00:21:04.107 { 00:21:04.107 "name": "BaseBdev3", 00:21:04.107 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:21:04.107 "is_configured": true, 00:21:04.107 "data_offset": 2048, 00:21:04.107 "data_size": 63488 00:21:04.107 }, 
00:21:04.107 { 00:21:04.107 "name": "BaseBdev4", 00:21:04.107 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:21:04.107 "is_configured": true, 00:21:04.107 "data_offset": 2048, 00:21:04.107 "data_size": 63488 00:21:04.107 } 00:21:04.107 ] 00:21:04.107 }' 00:21:04.107 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.107 09:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:04.674 "name": "raid_bdev1", 00:21:04.674 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:21:04.674 "strip_size_kb": 0, 00:21:04.674 "state": "online", 00:21:04.674 "raid_level": "raid1", 00:21:04.674 "superblock": true, 00:21:04.674 "num_base_bdevs": 4, 00:21:04.674 
"num_base_bdevs_discovered": 3, 00:21:04.674 "num_base_bdevs_operational": 3, 00:21:04.674 "base_bdevs_list": [ 00:21:04.674 { 00:21:04.674 "name": "spare", 00:21:04.674 "uuid": "cebbc231-35e1-570a-a230-53c9942a033f", 00:21:04.674 "is_configured": true, 00:21:04.674 "data_offset": 2048, 00:21:04.674 "data_size": 63488 00:21:04.674 }, 00:21:04.674 { 00:21:04.674 "name": null, 00:21:04.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.674 "is_configured": false, 00:21:04.674 "data_offset": 2048, 00:21:04.674 "data_size": 63488 00:21:04.674 }, 00:21:04.674 { 00:21:04.674 "name": "BaseBdev3", 00:21:04.674 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:21:04.674 "is_configured": true, 00:21:04.674 "data_offset": 2048, 00:21:04.674 "data_size": 63488 00:21:04.674 }, 00:21:04.674 { 00:21:04.674 "name": "BaseBdev4", 00:21:04.674 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:21:04.674 "is_configured": true, 00:21:04.674 "data_offset": 2048, 00:21:04.674 "data_size": 63488 00:21:04.674 } 00:21:04.674 ] 00:21:04.674 }' 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:04.674 09:20:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:04.674 [2024-10-15 09:20:48.564660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:04.674 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.933 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.933 "name": "raid_bdev1", 00:21:04.933 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:21:04.933 "strip_size_kb": 0, 00:21:04.933 "state": "online", 00:21:04.933 "raid_level": "raid1", 00:21:04.933 "superblock": true, 00:21:04.933 "num_base_bdevs": 4, 00:21:04.933 "num_base_bdevs_discovered": 2, 00:21:04.933 "num_base_bdevs_operational": 2, 00:21:04.933 "base_bdevs_list": [ 00:21:04.933 { 00:21:04.933 "name": null, 00:21:04.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.933 "is_configured": false, 00:21:04.933 "data_offset": 0, 00:21:04.933 "data_size": 63488 00:21:04.933 }, 00:21:04.933 { 00:21:04.933 "name": null, 00:21:04.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.933 "is_configured": false, 00:21:04.933 "data_offset": 2048, 00:21:04.933 "data_size": 63488 00:21:04.933 }, 00:21:04.933 { 00:21:04.933 "name": "BaseBdev3", 00:21:04.933 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:21:04.933 "is_configured": true, 00:21:04.933 "data_offset": 2048, 00:21:04.933 "data_size": 63488 00:21:04.933 }, 00:21:04.933 { 00:21:04.933 "name": "BaseBdev4", 00:21:04.933 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:21:04.933 "is_configured": true, 00:21:04.933 "data_offset": 2048, 00:21:04.933 "data_size": 63488 00:21:04.933 } 00:21:04.933 ] 00:21:04.933 }' 00:21:04.933 09:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.933 09:20:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:05.192 09:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:05.192 09:20:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.192 09:20:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:05.450 [2024-10-15 09:20:49.120964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:05.451 [2024-10-15 09:20:49.121284] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:05.451 [2024-10-15 09:20:49.121329] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:05.451 [2024-10-15 09:20:49.121387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:05.451 [2024-10-15 09:20:49.136371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:21:05.451 09:20:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.451 09:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:05.451 [2024-10-15 09:20:49.139139] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:06.386 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:06.386 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:06.386 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:06.386 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:06.386 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:06.386 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.387 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.387 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.387 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:06.387 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.387 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:06.387 "name": "raid_bdev1", 00:21:06.387 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:21:06.387 "strip_size_kb": 0, 00:21:06.387 "state": "online", 00:21:06.387 "raid_level": "raid1", 00:21:06.387 "superblock": true, 00:21:06.387 "num_base_bdevs": 4, 00:21:06.387 "num_base_bdevs_discovered": 3, 00:21:06.387 "num_base_bdevs_operational": 3, 00:21:06.387 "process": { 00:21:06.387 "type": "rebuild", 00:21:06.387 "target": "spare", 00:21:06.387 "progress": { 00:21:06.387 "blocks": 20480, 00:21:06.387 "percent": 32 00:21:06.387 } 00:21:06.387 }, 00:21:06.387 "base_bdevs_list": [ 00:21:06.387 { 00:21:06.387 "name": "spare", 00:21:06.387 "uuid": "cebbc231-35e1-570a-a230-53c9942a033f", 00:21:06.387 "is_configured": true, 00:21:06.387 "data_offset": 2048, 00:21:06.387 "data_size": 63488 00:21:06.387 }, 00:21:06.387 { 00:21:06.387 "name": null, 00:21:06.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.387 "is_configured": false, 00:21:06.387 "data_offset": 2048, 00:21:06.387 "data_size": 63488 00:21:06.387 }, 00:21:06.387 { 00:21:06.387 "name": "BaseBdev3", 00:21:06.387 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:21:06.387 "is_configured": true, 00:21:06.387 "data_offset": 2048, 00:21:06.387 "data_size": 63488 00:21:06.387 }, 00:21:06.387 { 
00:21:06.387 "name": "BaseBdev4", 00:21:06.387 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:21:06.387 "is_configured": true, 00:21:06.387 "data_offset": 2048, 00:21:06.387 "data_size": 63488 00:21:06.387 } 00:21:06.387 ] 00:21:06.387 }' 00:21:06.387 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:06.387 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:06.387 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:06.387 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:06.387 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:06.387 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.387 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:06.387 [2024-10-15 09:20:50.313203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:06.645 [2024-10-15 09:20:50.350915] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:06.645 [2024-10-15 09:20:50.351025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.645 [2024-10-15 09:20:50.351051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:06.645 [2024-10-15 09:20:50.351067] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:06.645 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.645 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:06.645 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:21:06.645 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:06.645 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:06.645 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:06.645 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:06.645 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.645 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.645 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.645 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.645 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.645 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.645 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.645 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:06.645 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.645 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.645 "name": "raid_bdev1", 00:21:06.645 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:21:06.645 "strip_size_kb": 0, 00:21:06.645 "state": "online", 00:21:06.645 "raid_level": "raid1", 00:21:06.645 "superblock": true, 00:21:06.645 "num_base_bdevs": 4, 00:21:06.645 "num_base_bdevs_discovered": 2, 00:21:06.645 "num_base_bdevs_operational": 2, 00:21:06.645 "base_bdevs_list": [ 00:21:06.645 { 00:21:06.645 
"name": null, 00:21:06.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.645 "is_configured": false, 00:21:06.645 "data_offset": 0, 00:21:06.645 "data_size": 63488 00:21:06.645 }, 00:21:06.645 { 00:21:06.645 "name": null, 00:21:06.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.645 "is_configured": false, 00:21:06.645 "data_offset": 2048, 00:21:06.645 "data_size": 63488 00:21:06.645 }, 00:21:06.645 { 00:21:06.645 "name": "BaseBdev3", 00:21:06.646 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:21:06.646 "is_configured": true, 00:21:06.646 "data_offset": 2048, 00:21:06.646 "data_size": 63488 00:21:06.646 }, 00:21:06.646 { 00:21:06.646 "name": "BaseBdev4", 00:21:06.646 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:21:06.646 "is_configured": true, 00:21:06.646 "data_offset": 2048, 00:21:06.646 "data_size": 63488 00:21:06.646 } 00:21:06.646 ] 00:21:06.646 }' 00:21:06.646 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.646 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:07.299 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:07.299 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.300 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:07.300 [2024-10-15 09:20:50.908888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:07.300 [2024-10-15 09:20:50.908981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.300 [2024-10-15 09:20:50.909023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:21:07.300 [2024-10-15 09:20:50.909044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.300 [2024-10-15 09:20:50.909771] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:07.300 [2024-10-15 09:20:50.909813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:07.300 [2024-10-15 09:20:50.909949] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:07.300 [2024-10-15 09:20:50.909977] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:07.300 [2024-10-15 09:20:50.909993] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:07.300 [2024-10-15 09:20:50.910027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:07.300 [2024-10-15 09:20:50.924890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:21:07.300 spare 00:21:07.300 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.300 09:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:07.300 [2024-10-15 09:20:50.927583] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:08.236 09:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:08.236 09:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:08.236 09:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:08.236 09:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:08.236 09:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:08.236 09:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.236 09:20:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.236 09:20:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.236 09:20:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:08.236 09:20:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.236 09:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:08.236 "name": "raid_bdev1", 00:21:08.236 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:21:08.236 "strip_size_kb": 0, 00:21:08.236 "state": "online", 00:21:08.236 "raid_level": "raid1", 00:21:08.236 "superblock": true, 00:21:08.236 "num_base_bdevs": 4, 00:21:08.236 "num_base_bdevs_discovered": 3, 00:21:08.236 "num_base_bdevs_operational": 3, 00:21:08.236 "process": { 00:21:08.236 "type": "rebuild", 00:21:08.236 "target": "spare", 00:21:08.236 "progress": { 00:21:08.236 "blocks": 20480, 00:21:08.236 "percent": 32 00:21:08.236 } 00:21:08.236 }, 00:21:08.236 "base_bdevs_list": [ 00:21:08.236 { 00:21:08.236 "name": "spare", 00:21:08.236 "uuid": "cebbc231-35e1-570a-a230-53c9942a033f", 00:21:08.236 "is_configured": true, 00:21:08.236 "data_offset": 2048, 00:21:08.236 "data_size": 63488 00:21:08.236 }, 00:21:08.236 { 00:21:08.236 "name": null, 00:21:08.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.236 "is_configured": false, 00:21:08.236 "data_offset": 2048, 00:21:08.236 "data_size": 63488 00:21:08.236 }, 00:21:08.236 { 00:21:08.236 "name": "BaseBdev3", 00:21:08.236 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:21:08.236 "is_configured": true, 00:21:08.236 "data_offset": 2048, 00:21:08.236 "data_size": 63488 00:21:08.236 }, 00:21:08.236 { 00:21:08.236 "name": "BaseBdev4", 00:21:08.236 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:21:08.236 "is_configured": true, 00:21:08.236 "data_offset": 2048, 00:21:08.236 "data_size": 63488 00:21:08.236 } 00:21:08.236 
] 00:21:08.236 }' 00:21:08.236 09:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:08.236 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:08.236 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:08.236 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:08.236 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:08.236 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.236 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:08.236 [2024-10-15 09:20:52.093807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:08.236 [2024-10-15 09:20:52.139483] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:08.236 [2024-10-15 09:20:52.139565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:08.236 [2024-10-15 09:20:52.139596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:08.236 [2024-10-15 09:20:52.139609] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:08.495 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.495 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:08.495 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.495 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.495 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:08.495 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:08.495 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:08.495 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.495 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.495 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.495 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.495 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.495 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.495 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:08.495 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.495 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.495 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.495 "name": "raid_bdev1", 00:21:08.495 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:21:08.495 "strip_size_kb": 0, 00:21:08.495 "state": "online", 00:21:08.495 "raid_level": "raid1", 00:21:08.495 "superblock": true, 00:21:08.495 "num_base_bdevs": 4, 00:21:08.495 "num_base_bdevs_discovered": 2, 00:21:08.495 "num_base_bdevs_operational": 2, 00:21:08.495 "base_bdevs_list": [ 00:21:08.495 { 00:21:08.495 "name": null, 00:21:08.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.495 "is_configured": false, 00:21:08.495 "data_offset": 0, 00:21:08.495 "data_size": 63488 00:21:08.495 }, 00:21:08.495 { 
00:21:08.495 "name": null, 00:21:08.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.495 "is_configured": false, 00:21:08.495 "data_offset": 2048, 00:21:08.495 "data_size": 63488 00:21:08.495 }, 00:21:08.495 { 00:21:08.495 "name": "BaseBdev3", 00:21:08.495 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:21:08.495 "is_configured": true, 00:21:08.495 "data_offset": 2048, 00:21:08.495 "data_size": 63488 00:21:08.495 }, 00:21:08.495 { 00:21:08.495 "name": "BaseBdev4", 00:21:08.495 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:21:08.495 "is_configured": true, 00:21:08.495 "data_offset": 2048, 00:21:08.495 "data_size": 63488 00:21:08.495 } 00:21:08.495 ] 00:21:08.495 }' 00:21:08.495 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.495 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:09.063 "name": "raid_bdev1", 00:21:09.063 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:21:09.063 "strip_size_kb": 0, 00:21:09.063 "state": "online", 00:21:09.063 "raid_level": "raid1", 00:21:09.063 "superblock": true, 00:21:09.063 "num_base_bdevs": 4, 00:21:09.063 "num_base_bdevs_discovered": 2, 00:21:09.063 "num_base_bdevs_operational": 2, 00:21:09.063 "base_bdevs_list": [ 00:21:09.063 { 00:21:09.063 "name": null, 00:21:09.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.063 "is_configured": false, 00:21:09.063 "data_offset": 0, 00:21:09.063 "data_size": 63488 00:21:09.063 }, 00:21:09.063 { 00:21:09.063 "name": null, 00:21:09.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.063 "is_configured": false, 00:21:09.063 "data_offset": 2048, 00:21:09.063 "data_size": 63488 00:21:09.063 }, 00:21:09.063 { 00:21:09.063 "name": "BaseBdev3", 00:21:09.063 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:21:09.063 "is_configured": true, 00:21:09.063 "data_offset": 2048, 00:21:09.063 "data_size": 63488 00:21:09.063 }, 00:21:09.063 { 00:21:09.063 "name": "BaseBdev4", 00:21:09.063 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:21:09.063 "is_configured": true, 00:21:09.063 "data_offset": 2048, 00:21:09.063 "data_size": 63488 00:21:09.063 } 00:21:09.063 ] 00:21:09.063 }' 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:09.063 [2024-10-15 09:20:52.893379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:09.063 [2024-10-15 09:20:52.893455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.063 [2024-10-15 09:20:52.893523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:21:09.063 [2024-10-15 09:20:52.893540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.063 [2024-10-15 09:20:52.894205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.063 [2024-10-15 09:20:52.894231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:09.063 [2024-10-15 09:20:52.894370] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:09.063 [2024-10-15 09:20:52.894395] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:09.063 [2024-10-15 09:20:52.894413] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:09.063 [2024-10-15 09:20:52.894428] 
bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:09.063 BaseBdev1 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.063 09:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:10.001 09:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:10.001 09:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:10.001 09:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:10.001 09:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:10.001 09:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:10.001 09:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:10.001 09:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.001 09:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:10.001 09:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.001 09:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.001 09:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.001 09:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.001 09:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.001 09:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:10.297 09:20:53 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.297 09:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.297 "name": "raid_bdev1", 00:21:10.297 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:21:10.297 "strip_size_kb": 0, 00:21:10.297 "state": "online", 00:21:10.297 "raid_level": "raid1", 00:21:10.297 "superblock": true, 00:21:10.297 "num_base_bdevs": 4, 00:21:10.297 "num_base_bdevs_discovered": 2, 00:21:10.297 "num_base_bdevs_operational": 2, 00:21:10.297 "base_bdevs_list": [ 00:21:10.297 { 00:21:10.297 "name": null, 00:21:10.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.297 "is_configured": false, 00:21:10.297 "data_offset": 0, 00:21:10.297 "data_size": 63488 00:21:10.297 }, 00:21:10.297 { 00:21:10.297 "name": null, 00:21:10.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.297 "is_configured": false, 00:21:10.297 "data_offset": 2048, 00:21:10.297 "data_size": 63488 00:21:10.297 }, 00:21:10.297 { 00:21:10.297 "name": "BaseBdev3", 00:21:10.297 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:21:10.297 "is_configured": true, 00:21:10.297 "data_offset": 2048, 00:21:10.297 "data_size": 63488 00:21:10.297 }, 00:21:10.297 { 00:21:10.297 "name": "BaseBdev4", 00:21:10.297 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:21:10.297 "is_configured": true, 00:21:10.297 "data_offset": 2048, 00:21:10.297 "data_size": 63488 00:21:10.297 } 00:21:10.297 ] 00:21:10.297 }' 00:21:10.297 09:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.297 09:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:10.557 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:10.557 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:10.557 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 
-- # local process_type=none 00:21:10.557 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:10.557 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:10.557 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.557 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.557 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.557 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:10.557 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.557 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:10.557 "name": "raid_bdev1", 00:21:10.557 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:21:10.557 "strip_size_kb": 0, 00:21:10.557 "state": "online", 00:21:10.557 "raid_level": "raid1", 00:21:10.557 "superblock": true, 00:21:10.557 "num_base_bdevs": 4, 00:21:10.557 "num_base_bdevs_discovered": 2, 00:21:10.557 "num_base_bdevs_operational": 2, 00:21:10.557 "base_bdevs_list": [ 00:21:10.557 { 00:21:10.557 "name": null, 00:21:10.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.557 "is_configured": false, 00:21:10.557 "data_offset": 0, 00:21:10.557 "data_size": 63488 00:21:10.557 }, 00:21:10.557 { 00:21:10.557 "name": null, 00:21:10.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.557 "is_configured": false, 00:21:10.557 "data_offset": 2048, 00:21:10.557 "data_size": 63488 00:21:10.557 }, 00:21:10.557 { 00:21:10.557 "name": "BaseBdev3", 00:21:10.557 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:21:10.557 "is_configured": true, 00:21:10.557 "data_offset": 2048, 00:21:10.557 "data_size": 63488 00:21:10.557 }, 00:21:10.557 { 00:21:10.557 
"name": "BaseBdev4", 00:21:10.557 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:21:10.557 "is_configured": true, 00:21:10.557 "data_offset": 2048, 00:21:10.557 "data_size": 63488 00:21:10.557 } 00:21:10.557 ] 00:21:10.557 }' 00:21:10.557 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:10.816 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:10.816 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:10.816 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:10.816 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:10.816 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:21:10.816 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:10.816 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:10.816 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:10.816 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:10.816 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:10.816 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:10.816 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.816 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:10.816 [2024-10-15 09:20:54.578375] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:10.816 [2024-10-15 09:20:54.578658] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:10.816 [2024-10-15 09:20:54.578687] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:10.816 request: 00:21:10.816 { 00:21:10.816 "base_bdev": "BaseBdev1", 00:21:10.816 "raid_bdev": "raid_bdev1", 00:21:10.816 "method": "bdev_raid_add_base_bdev", 00:21:10.816 "req_id": 1 00:21:10.816 } 00:21:10.816 Got JSON-RPC error response 00:21:10.816 response: 00:21:10.816 { 00:21:10.816 "code": -22, 00:21:10.816 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:10.816 } 00:21:10.816 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:10.816 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:21:10.816 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:10.816 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:10.816 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:10.816 09:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:11.752 09:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:11.752 09:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:11.752 09:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:11.752 09:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:11.752 09:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:21:11.752 09:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:11.752 09:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.752 09:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.752 09:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.752 09:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.752 09:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.752 09:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.752 09:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.752 09:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:11.752 09:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.752 09:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.752 "name": "raid_bdev1", 00:21:11.752 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:21:11.752 "strip_size_kb": 0, 00:21:11.752 "state": "online", 00:21:11.752 "raid_level": "raid1", 00:21:11.752 "superblock": true, 00:21:11.752 "num_base_bdevs": 4, 00:21:11.752 "num_base_bdevs_discovered": 2, 00:21:11.752 "num_base_bdevs_operational": 2, 00:21:11.752 "base_bdevs_list": [ 00:21:11.752 { 00:21:11.752 "name": null, 00:21:11.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.752 "is_configured": false, 00:21:11.752 "data_offset": 0, 00:21:11.752 "data_size": 63488 00:21:11.752 }, 00:21:11.752 { 00:21:11.752 "name": null, 00:21:11.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.752 "is_configured": false, 
00:21:11.752 "data_offset": 2048, 00:21:11.752 "data_size": 63488 00:21:11.752 }, 00:21:11.752 { 00:21:11.752 "name": "BaseBdev3", 00:21:11.752 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:21:11.752 "is_configured": true, 00:21:11.752 "data_offset": 2048, 00:21:11.752 "data_size": 63488 00:21:11.752 }, 00:21:11.752 { 00:21:11.752 "name": "BaseBdev4", 00:21:11.752 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:21:11.752 "is_configured": true, 00:21:11.752 "data_offset": 2048, 00:21:11.752 "data_size": 63488 00:21:11.752 } 00:21:11.752 ] 00:21:11.752 }' 00:21:11.752 09:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.752 09:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:12.320 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:12.320 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:12.320 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:12.320 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:12.320 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:12.320 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.320 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.320 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.320 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:12.320 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.320 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:21:12.320 "name": "raid_bdev1", 00:21:12.320 "uuid": "a5b05d6b-bb8a-455c-8fbb-e5d6f645e717", 00:21:12.320 "strip_size_kb": 0, 00:21:12.320 "state": "online", 00:21:12.320 "raid_level": "raid1", 00:21:12.320 "superblock": true, 00:21:12.320 "num_base_bdevs": 4, 00:21:12.320 "num_base_bdevs_discovered": 2, 00:21:12.320 "num_base_bdevs_operational": 2, 00:21:12.320 "base_bdevs_list": [ 00:21:12.320 { 00:21:12.320 "name": null, 00:21:12.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.320 "is_configured": false, 00:21:12.320 "data_offset": 0, 00:21:12.320 "data_size": 63488 00:21:12.320 }, 00:21:12.320 { 00:21:12.320 "name": null, 00:21:12.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.320 "is_configured": false, 00:21:12.320 "data_offset": 2048, 00:21:12.320 "data_size": 63488 00:21:12.320 }, 00:21:12.320 { 00:21:12.320 "name": "BaseBdev3", 00:21:12.320 "uuid": "790a43be-2e94-5138-b290-885679a4d33c", 00:21:12.320 "is_configured": true, 00:21:12.320 "data_offset": 2048, 00:21:12.320 "data_size": 63488 00:21:12.320 }, 00:21:12.320 { 00:21:12.320 "name": "BaseBdev4", 00:21:12.320 "uuid": "04a58f9c-81cb-51e0-9c6d-405c6894c8f9", 00:21:12.320 "is_configured": true, 00:21:12.320 "data_offset": 2048, 00:21:12.320 "data_size": 63488 00:21:12.320 } 00:21:12.320 ] 00:21:12.320 }' 00:21:12.320 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:12.320 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:12.320 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:12.580 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:12.580 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79782 00:21:12.580 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 
79782 ']' 00:21:12.580 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 79782 00:21:12.580 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:21:12.580 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:12.580 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79782 00:21:12.580 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:12.580 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:12.580 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79782' 00:21:12.580 killing process with pid 79782 00:21:12.580 Received shutdown signal, test time was about 19.454372 seconds 00:21:12.580 00:21:12.580 Latency(us) 00:21:12.580 [2024-10-15T09:20:56.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.580 [2024-10-15T09:20:56.508Z] =================================================================================================================== 00:21:12.580 [2024-10-15T09:20:56.508Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:12.580 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 79782 00:21:12.580 09:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 79782 00:21:12.580 [2024-10-15 09:20:56.312217] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:12.580 [2024-10-15 09:20:56.312412] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:12.580 [2024-10-15 09:20:56.312517] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:12.580 [2024-10-15 09:20:56.312551] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:12.839 [2024-10-15 09:20:56.733701] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:14.304 09:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:21:14.304 00:21:14.304 real 0m23.388s 00:21:14.304 user 0m31.740s 00:21:14.304 sys 0m2.483s 00:21:14.304 09:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:14.304 09:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.304 ************************************ 00:21:14.304 END TEST raid_rebuild_test_sb_io 00:21:14.304 ************************************ 00:21:14.304 09:20:57 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:21:14.304 09:20:57 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:21:14.304 09:20:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:14.304 09:20:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:14.304 09:20:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:14.304 ************************************ 00:21:14.304 START TEST raid5f_state_function_test 00:21:14.304 ************************************ 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80528 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:14.304 Process raid pid: 80528 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80528' 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80528 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80528 ']' 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:14.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:14.304 09:20:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.304 [2024-10-15 09:20:58.131174] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:21:14.304 [2024-10-15 09:20:58.131372] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.563 [2024-10-15 09:20:58.315010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.823 [2024-10-15 09:20:58.524191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.082 [2024-10-15 09:20:58.811559] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:15.082 [2024-10-15 09:20:58.811605] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.342 [2024-10-15 09:20:59.075955] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:15.342 [2024-10-15 09:20:59.076036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:15.342 [2024-10-15 09:20:59.076052] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:15.342 [2024-10-15 09:20:59.076068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:15.342 [2024-10-15 09:20:59.076078] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:21:15.342 [2024-10-15 09:20:59.076092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:15.342 "name": "Existed_Raid", 00:21:15.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.342 "strip_size_kb": 64, 00:21:15.342 "state": "configuring", 00:21:15.342 "raid_level": "raid5f", 00:21:15.342 "superblock": false, 00:21:15.342 "num_base_bdevs": 3, 00:21:15.342 "num_base_bdevs_discovered": 0, 00:21:15.342 "num_base_bdevs_operational": 3, 00:21:15.342 "base_bdevs_list": [ 00:21:15.342 { 00:21:15.342 "name": "BaseBdev1", 00:21:15.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.342 "is_configured": false, 00:21:15.342 "data_offset": 0, 00:21:15.342 "data_size": 0 00:21:15.342 }, 00:21:15.342 { 00:21:15.342 "name": "BaseBdev2", 00:21:15.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.342 "is_configured": false, 00:21:15.342 "data_offset": 0, 00:21:15.342 "data_size": 0 00:21:15.342 }, 00:21:15.342 { 00:21:15.342 "name": "BaseBdev3", 00:21:15.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.342 "is_configured": false, 00:21:15.342 "data_offset": 0, 00:21:15.342 "data_size": 0 00:21:15.342 } 00:21:15.342 ] 00:21:15.342 }' 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:15.342 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.911 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:15.911 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.911 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.911 [2024-10-15 09:20:59.596035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:15.911 [2024-10-15 09:20:59.596102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:21:15.911 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.911 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:15.911 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.911 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.911 [2024-10-15 09:20:59.604005] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:15.911 [2024-10-15 09:20:59.604078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:15.911 [2024-10-15 09:20:59.604094] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:15.911 [2024-10-15 09:20:59.604110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:15.911 [2024-10-15 09:20:59.604120] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:15.911 [2024-10-15 09:20:59.604149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:15.911 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.912 [2024-10-15 09:20:59.670936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:15.912 BaseBdev1 00:21:15.912 09:20:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.912 [ 00:21:15.912 { 00:21:15.912 "name": "BaseBdev1", 00:21:15.912 "aliases": [ 00:21:15.912 "8c97952e-736f-453f-83e9-842259e8dffb" 00:21:15.912 ], 00:21:15.912 "product_name": "Malloc disk", 00:21:15.912 "block_size": 512, 00:21:15.912 "num_blocks": 65536, 00:21:15.912 "uuid": "8c97952e-736f-453f-83e9-842259e8dffb", 00:21:15.912 "assigned_rate_limits": { 00:21:15.912 "rw_ios_per_sec": 0, 00:21:15.912 
"rw_mbytes_per_sec": 0, 00:21:15.912 "r_mbytes_per_sec": 0, 00:21:15.912 "w_mbytes_per_sec": 0 00:21:15.912 }, 00:21:15.912 "claimed": true, 00:21:15.912 "claim_type": "exclusive_write", 00:21:15.912 "zoned": false, 00:21:15.912 "supported_io_types": { 00:21:15.912 "read": true, 00:21:15.912 "write": true, 00:21:15.912 "unmap": true, 00:21:15.912 "flush": true, 00:21:15.912 "reset": true, 00:21:15.912 "nvme_admin": false, 00:21:15.912 "nvme_io": false, 00:21:15.912 "nvme_io_md": false, 00:21:15.912 "write_zeroes": true, 00:21:15.912 "zcopy": true, 00:21:15.912 "get_zone_info": false, 00:21:15.912 "zone_management": false, 00:21:15.912 "zone_append": false, 00:21:15.912 "compare": false, 00:21:15.912 "compare_and_write": false, 00:21:15.912 "abort": true, 00:21:15.912 "seek_hole": false, 00:21:15.912 "seek_data": false, 00:21:15.912 "copy": true, 00:21:15.912 "nvme_iov_md": false 00:21:15.912 }, 00:21:15.912 "memory_domains": [ 00:21:15.912 { 00:21:15.912 "dma_device_id": "system", 00:21:15.912 "dma_device_type": 1 00:21:15.912 }, 00:21:15.912 { 00:21:15.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.912 "dma_device_type": 2 00:21:15.912 } 00:21:15.912 ], 00:21:15.912 "driver_specific": {} 00:21:15.912 } 00:21:15.912 ] 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:15.912 09:20:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:15.912 "name": "Existed_Raid", 00:21:15.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.912 "strip_size_kb": 64, 00:21:15.912 "state": "configuring", 00:21:15.912 "raid_level": "raid5f", 00:21:15.912 "superblock": false, 00:21:15.912 "num_base_bdevs": 3, 00:21:15.912 "num_base_bdevs_discovered": 1, 00:21:15.912 "num_base_bdevs_operational": 3, 00:21:15.912 "base_bdevs_list": [ 00:21:15.912 { 00:21:15.912 "name": "BaseBdev1", 00:21:15.912 "uuid": "8c97952e-736f-453f-83e9-842259e8dffb", 00:21:15.912 "is_configured": true, 00:21:15.912 "data_offset": 0, 00:21:15.912 "data_size": 65536 00:21:15.912 }, 00:21:15.912 { 00:21:15.912 "name": 
"BaseBdev2", 00:21:15.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.912 "is_configured": false, 00:21:15.912 "data_offset": 0, 00:21:15.912 "data_size": 0 00:21:15.912 }, 00:21:15.912 { 00:21:15.912 "name": "BaseBdev3", 00:21:15.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.912 "is_configured": false, 00:21:15.912 "data_offset": 0, 00:21:15.912 "data_size": 0 00:21:15.912 } 00:21:15.912 ] 00:21:15.912 }' 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:15.912 09:20:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.480 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:16.480 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.480 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.480 [2024-10-15 09:21:00.223090] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:16.480 [2024-10-15 09:21:00.223180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:16.480 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.480 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:16.480 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.480 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.480 [2024-10-15 09:21:00.231176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:16.480 [2024-10-15 09:21:00.233741] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:21:16.480 [2024-10-15 09:21:00.233794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:16.480 [2024-10-15 09:21:00.233810] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:16.480 [2024-10-15 09:21:00.233826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:16.480 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.480 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:16.480 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:16.480 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:16.480 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:16.480 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:16.480 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:16.480 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:16.480 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:16.480 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.480 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.480 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.480 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.481 09:21:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.481 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:16.481 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.481 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.481 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.481 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:16.481 "name": "Existed_Raid", 00:21:16.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.481 "strip_size_kb": 64, 00:21:16.481 "state": "configuring", 00:21:16.481 "raid_level": "raid5f", 00:21:16.481 "superblock": false, 00:21:16.481 "num_base_bdevs": 3, 00:21:16.481 "num_base_bdevs_discovered": 1, 00:21:16.481 "num_base_bdevs_operational": 3, 00:21:16.481 "base_bdevs_list": [ 00:21:16.481 { 00:21:16.481 "name": "BaseBdev1", 00:21:16.481 "uuid": "8c97952e-736f-453f-83e9-842259e8dffb", 00:21:16.481 "is_configured": true, 00:21:16.481 "data_offset": 0, 00:21:16.481 "data_size": 65536 00:21:16.481 }, 00:21:16.481 { 00:21:16.481 "name": "BaseBdev2", 00:21:16.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.481 "is_configured": false, 00:21:16.481 "data_offset": 0, 00:21:16.481 "data_size": 0 00:21:16.481 }, 00:21:16.481 { 00:21:16.481 "name": "BaseBdev3", 00:21:16.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.481 "is_configured": false, 00:21:16.481 "data_offset": 0, 00:21:16.481 "data_size": 0 00:21:16.481 } 00:21:16.481 ] 00:21:16.481 }' 00:21:16.481 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:16.481 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.049 [2024-10-15 09:21:00.799041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:17.049 BaseBdev2 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:17.049 [ 00:21:17.049 { 00:21:17.049 "name": "BaseBdev2", 00:21:17.049 "aliases": [ 00:21:17.049 "ac196edc-7a20-48c1-b275-f7b27a7129e5" 00:21:17.049 ], 00:21:17.049 "product_name": "Malloc disk", 00:21:17.049 "block_size": 512, 00:21:17.049 "num_blocks": 65536, 00:21:17.049 "uuid": "ac196edc-7a20-48c1-b275-f7b27a7129e5", 00:21:17.049 "assigned_rate_limits": { 00:21:17.049 "rw_ios_per_sec": 0, 00:21:17.049 "rw_mbytes_per_sec": 0, 00:21:17.049 "r_mbytes_per_sec": 0, 00:21:17.049 "w_mbytes_per_sec": 0 00:21:17.049 }, 00:21:17.049 "claimed": true, 00:21:17.049 "claim_type": "exclusive_write", 00:21:17.049 "zoned": false, 00:21:17.049 "supported_io_types": { 00:21:17.049 "read": true, 00:21:17.049 "write": true, 00:21:17.049 "unmap": true, 00:21:17.049 "flush": true, 00:21:17.049 "reset": true, 00:21:17.049 "nvme_admin": false, 00:21:17.049 "nvme_io": false, 00:21:17.049 "nvme_io_md": false, 00:21:17.049 "write_zeroes": true, 00:21:17.049 "zcopy": true, 00:21:17.049 "get_zone_info": false, 00:21:17.049 "zone_management": false, 00:21:17.049 "zone_append": false, 00:21:17.049 "compare": false, 00:21:17.049 "compare_and_write": false, 00:21:17.049 "abort": true, 00:21:17.049 "seek_hole": false, 00:21:17.049 "seek_data": false, 00:21:17.049 "copy": true, 00:21:17.049 "nvme_iov_md": false 00:21:17.049 }, 00:21:17.049 "memory_domains": [ 00:21:17.049 { 00:21:17.049 "dma_device_id": "system", 00:21:17.049 "dma_device_type": 1 00:21:17.049 }, 00:21:17.049 { 00:21:17.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.049 "dma_device_type": 2 00:21:17.049 } 00:21:17.049 ], 00:21:17.049 "driver_specific": {} 00:21:17.049 } 00:21:17.049 ] 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:21:17.049 "name": "Existed_Raid", 00:21:17.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.049 "strip_size_kb": 64, 00:21:17.049 "state": "configuring", 00:21:17.049 "raid_level": "raid5f", 00:21:17.049 "superblock": false, 00:21:17.049 "num_base_bdevs": 3, 00:21:17.049 "num_base_bdevs_discovered": 2, 00:21:17.049 "num_base_bdevs_operational": 3, 00:21:17.049 "base_bdevs_list": [ 00:21:17.049 { 00:21:17.049 "name": "BaseBdev1", 00:21:17.049 "uuid": "8c97952e-736f-453f-83e9-842259e8dffb", 00:21:17.049 "is_configured": true, 00:21:17.049 "data_offset": 0, 00:21:17.049 "data_size": 65536 00:21:17.049 }, 00:21:17.049 { 00:21:17.049 "name": "BaseBdev2", 00:21:17.049 "uuid": "ac196edc-7a20-48c1-b275-f7b27a7129e5", 00:21:17.049 "is_configured": true, 00:21:17.049 "data_offset": 0, 00:21:17.049 "data_size": 65536 00:21:17.049 }, 00:21:17.049 { 00:21:17.049 "name": "BaseBdev3", 00:21:17.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.049 "is_configured": false, 00:21:17.049 "data_offset": 0, 00:21:17.049 "data_size": 0 00:21:17.049 } 00:21:17.049 ] 00:21:17.049 }' 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.049 09:21:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.617 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:17.617 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.617 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.617 [2024-10-15 09:21:01.409204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:17.617 [2024-10-15 09:21:01.409334] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:17.617 [2024-10-15 09:21:01.409376] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:21:17.617 [2024-10-15 09:21:01.409864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:17.617 [2024-10-15 09:21:01.416888] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:17.617 [2024-10-15 09:21:01.416949] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:17.617 [2024-10-15 09:21:01.417528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:17.617 BaseBdev3 00:21:17.617 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.617 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:17.617 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:17.617 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:17.617 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:17.617 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:17.617 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:17.617 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:21:17.617 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.617 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.617 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.617 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:21:17.617 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.617 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.617 [ 00:21:17.617 { 00:21:17.617 "name": "BaseBdev3", 00:21:17.617 "aliases": [ 00:21:17.617 "4462e552-b1c2-4d68-90db-4b86e3ae93d2" 00:21:17.617 ], 00:21:17.617 "product_name": "Malloc disk", 00:21:17.617 "block_size": 512, 00:21:17.617 "num_blocks": 65536, 00:21:17.617 "uuid": "4462e552-b1c2-4d68-90db-4b86e3ae93d2", 00:21:17.617 "assigned_rate_limits": { 00:21:17.617 "rw_ios_per_sec": 0, 00:21:17.617 "rw_mbytes_per_sec": 0, 00:21:17.617 "r_mbytes_per_sec": 0, 00:21:17.617 "w_mbytes_per_sec": 0 00:21:17.617 }, 00:21:17.617 "claimed": true, 00:21:17.617 "claim_type": "exclusive_write", 00:21:17.617 "zoned": false, 00:21:17.617 "supported_io_types": { 00:21:17.617 "read": true, 00:21:17.617 "write": true, 00:21:17.617 "unmap": true, 00:21:17.617 "flush": true, 00:21:17.617 "reset": true, 00:21:17.617 "nvme_admin": false, 00:21:17.617 "nvme_io": false, 00:21:17.617 "nvme_io_md": false, 00:21:17.617 "write_zeroes": true, 00:21:17.617 "zcopy": true, 00:21:17.617 "get_zone_info": false, 00:21:17.617 "zone_management": false, 00:21:17.617 "zone_append": false, 00:21:17.617 "compare": false, 00:21:17.617 "compare_and_write": false, 00:21:17.617 "abort": true, 00:21:17.617 "seek_hole": false, 00:21:17.617 "seek_data": false, 00:21:17.617 "copy": true, 00:21:17.617 "nvme_iov_md": false 00:21:17.617 }, 00:21:17.617 "memory_domains": [ 00:21:17.617 { 00:21:17.617 "dma_device_id": "system", 00:21:17.617 "dma_device_type": 1 00:21:17.617 }, 00:21:17.617 { 00:21:17.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.617 "dma_device_type": 2 00:21:17.617 } 00:21:17.617 ], 00:21:17.617 "driver_specific": {} 00:21:17.617 } 00:21:17.617 ] 00:21:17.617 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:21:17.617 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:17.617 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:17.618 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:17.618 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:17.618 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:17.618 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:17.618 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:17.618 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:17.618 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:17.618 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.618 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.618 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.618 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.618 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.618 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.618 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.618 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.618 09:21:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.618 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.618 "name": "Existed_Raid", 00:21:17.618 "uuid": "d4ffe902-3dbc-4d35-ace8-8e16460f1ff9", 00:21:17.618 "strip_size_kb": 64, 00:21:17.618 "state": "online", 00:21:17.618 "raid_level": "raid5f", 00:21:17.618 "superblock": false, 00:21:17.618 "num_base_bdevs": 3, 00:21:17.618 "num_base_bdevs_discovered": 3, 00:21:17.618 "num_base_bdevs_operational": 3, 00:21:17.618 "base_bdevs_list": [ 00:21:17.618 { 00:21:17.618 "name": "BaseBdev1", 00:21:17.618 "uuid": "8c97952e-736f-453f-83e9-842259e8dffb", 00:21:17.618 "is_configured": true, 00:21:17.618 "data_offset": 0, 00:21:17.618 "data_size": 65536 00:21:17.618 }, 00:21:17.618 { 00:21:17.618 "name": "BaseBdev2", 00:21:17.618 "uuid": "ac196edc-7a20-48c1-b275-f7b27a7129e5", 00:21:17.618 "is_configured": true, 00:21:17.618 "data_offset": 0, 00:21:17.618 "data_size": 65536 00:21:17.618 }, 00:21:17.618 { 00:21:17.618 "name": "BaseBdev3", 00:21:17.618 "uuid": "4462e552-b1c2-4d68-90db-4b86e3ae93d2", 00:21:17.618 "is_configured": true, 00:21:17.618 "data_offset": 0, 00:21:17.618 "data_size": 65536 00:21:17.618 } 00:21:17.618 ] 00:21:17.618 }' 00:21:17.618 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.618 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.186 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:18.186 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:18.186 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:18.186 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:18.186 09:21:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:18.186 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:18.186 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:18.186 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:18.186 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.186 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.186 [2024-10-15 09:21:01.941816] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:18.186 09:21:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.186 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:18.186 "name": "Existed_Raid", 00:21:18.186 "aliases": [ 00:21:18.186 "d4ffe902-3dbc-4d35-ace8-8e16460f1ff9" 00:21:18.186 ], 00:21:18.186 "product_name": "Raid Volume", 00:21:18.186 "block_size": 512, 00:21:18.186 "num_blocks": 131072, 00:21:18.186 "uuid": "d4ffe902-3dbc-4d35-ace8-8e16460f1ff9", 00:21:18.186 "assigned_rate_limits": { 00:21:18.186 "rw_ios_per_sec": 0, 00:21:18.186 "rw_mbytes_per_sec": 0, 00:21:18.186 "r_mbytes_per_sec": 0, 00:21:18.186 "w_mbytes_per_sec": 0 00:21:18.186 }, 00:21:18.186 "claimed": false, 00:21:18.186 "zoned": false, 00:21:18.186 "supported_io_types": { 00:21:18.186 "read": true, 00:21:18.186 "write": true, 00:21:18.186 "unmap": false, 00:21:18.186 "flush": false, 00:21:18.186 "reset": true, 00:21:18.186 "nvme_admin": false, 00:21:18.186 "nvme_io": false, 00:21:18.186 "nvme_io_md": false, 00:21:18.186 "write_zeroes": true, 00:21:18.186 "zcopy": false, 00:21:18.186 "get_zone_info": false, 00:21:18.186 "zone_management": false, 00:21:18.186 "zone_append": false, 
00:21:18.186 "compare": false, 00:21:18.186 "compare_and_write": false, 00:21:18.186 "abort": false, 00:21:18.186 "seek_hole": false, 00:21:18.186 "seek_data": false, 00:21:18.186 "copy": false, 00:21:18.186 "nvme_iov_md": false 00:21:18.186 }, 00:21:18.186 "driver_specific": { 00:21:18.186 "raid": { 00:21:18.186 "uuid": "d4ffe902-3dbc-4d35-ace8-8e16460f1ff9", 00:21:18.186 "strip_size_kb": 64, 00:21:18.186 "state": "online", 00:21:18.186 "raid_level": "raid5f", 00:21:18.186 "superblock": false, 00:21:18.186 "num_base_bdevs": 3, 00:21:18.186 "num_base_bdevs_discovered": 3, 00:21:18.186 "num_base_bdevs_operational": 3, 00:21:18.186 "base_bdevs_list": [ 00:21:18.186 { 00:21:18.186 "name": "BaseBdev1", 00:21:18.186 "uuid": "8c97952e-736f-453f-83e9-842259e8dffb", 00:21:18.186 "is_configured": true, 00:21:18.186 "data_offset": 0, 00:21:18.186 "data_size": 65536 00:21:18.186 }, 00:21:18.186 { 00:21:18.186 "name": "BaseBdev2", 00:21:18.186 "uuid": "ac196edc-7a20-48c1-b275-f7b27a7129e5", 00:21:18.186 "is_configured": true, 00:21:18.186 "data_offset": 0, 00:21:18.186 "data_size": 65536 00:21:18.186 }, 00:21:18.186 { 00:21:18.186 "name": "BaseBdev3", 00:21:18.186 "uuid": "4462e552-b1c2-4d68-90db-4b86e3ae93d2", 00:21:18.186 "is_configured": true, 00:21:18.186 "data_offset": 0, 00:21:18.186 "data_size": 65536 00:21:18.186 } 00:21:18.186 ] 00:21:18.186 } 00:21:18.186 } 00:21:18.186 }' 00:21:18.186 09:21:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:18.186 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:18.186 BaseBdev2 00:21:18.186 BaseBdev3' 00:21:18.186 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:18.186 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:21:18.186 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:18.186 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:18.186 09:21:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.186 09:21:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.186 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.445 [2024-10-15 09:21:02.257710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:18.445 
09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.445 09:21:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.704 09:21:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.704 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.704 "name": "Existed_Raid", 00:21:18.704 "uuid": "d4ffe902-3dbc-4d35-ace8-8e16460f1ff9", 00:21:18.704 "strip_size_kb": 64, 00:21:18.704 "state": 
"online", 00:21:18.704 "raid_level": "raid5f", 00:21:18.704 "superblock": false, 00:21:18.704 "num_base_bdevs": 3, 00:21:18.704 "num_base_bdevs_discovered": 2, 00:21:18.704 "num_base_bdevs_operational": 2, 00:21:18.704 "base_bdevs_list": [ 00:21:18.704 { 00:21:18.704 "name": null, 00:21:18.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.704 "is_configured": false, 00:21:18.704 "data_offset": 0, 00:21:18.704 "data_size": 65536 00:21:18.704 }, 00:21:18.704 { 00:21:18.704 "name": "BaseBdev2", 00:21:18.704 "uuid": "ac196edc-7a20-48c1-b275-f7b27a7129e5", 00:21:18.704 "is_configured": true, 00:21:18.704 "data_offset": 0, 00:21:18.704 "data_size": 65536 00:21:18.704 }, 00:21:18.704 { 00:21:18.704 "name": "BaseBdev3", 00:21:18.704 "uuid": "4462e552-b1c2-4d68-90db-4b86e3ae93d2", 00:21:18.704 "is_configured": true, 00:21:18.704 "data_offset": 0, 00:21:18.704 "data_size": 65536 00:21:18.704 } 00:21:18.704 ] 00:21:18.704 }' 00:21:18.704 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.704 09:21:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.962 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:18.962 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:18.962 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.962 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:18.962 09:21:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.962 09:21:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.962 09:21:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.221 09:21:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:19.221 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:19.221 09:21:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:19.221 09:21:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.221 09:21:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.221 [2024-10-15 09:21:02.922047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:19.221 [2024-10-15 09:21:02.922214] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:19.221 [2024-10-15 09:21:03.014715] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:19.221 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.222 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:19.222 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:19.222 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.222 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.222 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.222 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:19.222 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.222 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:19.222 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:21:19.222 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:19.222 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.222 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.222 [2024-10-15 09:21:03.074815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:19.222 [2024-10-15 09:21:03.075099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.481 BaseBdev2 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.481 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:21:19.481 [ 00:21:19.481 { 00:21:19.481 "name": "BaseBdev2", 00:21:19.481 "aliases": [ 00:21:19.481 "562b286d-1419-4a66-baa6-b79c42c4d35f" 00:21:19.481 ], 00:21:19.481 "product_name": "Malloc disk", 00:21:19.481 "block_size": 512, 00:21:19.481 "num_blocks": 65536, 00:21:19.481 "uuid": "562b286d-1419-4a66-baa6-b79c42c4d35f", 00:21:19.481 "assigned_rate_limits": { 00:21:19.481 "rw_ios_per_sec": 0, 00:21:19.481 "rw_mbytes_per_sec": 0, 00:21:19.481 "r_mbytes_per_sec": 0, 00:21:19.481 "w_mbytes_per_sec": 0 00:21:19.481 }, 00:21:19.481 "claimed": false, 00:21:19.481 "zoned": false, 00:21:19.481 "supported_io_types": { 00:21:19.481 "read": true, 00:21:19.481 "write": true, 00:21:19.481 "unmap": true, 00:21:19.481 "flush": true, 00:21:19.481 "reset": true, 00:21:19.481 "nvme_admin": false, 00:21:19.481 "nvme_io": false, 00:21:19.481 "nvme_io_md": false, 00:21:19.481 "write_zeroes": true, 00:21:19.481 "zcopy": true, 00:21:19.481 "get_zone_info": false, 00:21:19.481 "zone_management": false, 00:21:19.481 "zone_append": false, 00:21:19.481 "compare": false, 00:21:19.481 "compare_and_write": false, 00:21:19.481 "abort": true, 00:21:19.481 "seek_hole": false, 00:21:19.481 "seek_data": false, 00:21:19.481 "copy": true, 00:21:19.481 "nvme_iov_md": false 00:21:19.481 }, 00:21:19.481 "memory_domains": [ 00:21:19.481 { 00:21:19.482 "dma_device_id": "system", 00:21:19.482 "dma_device_type": 1 00:21:19.482 }, 00:21:19.482 { 00:21:19.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.482 "dma_device_type": 2 00:21:19.482 } 00:21:19.482 ], 00:21:19.482 "driver_specific": {} 00:21:19.482 } 00:21:19.482 ] 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.482 BaseBdev3 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:19.482 [ 00:21:19.482 { 00:21:19.482 "name": "BaseBdev3", 00:21:19.482 "aliases": [ 00:21:19.482 "34a3cd28-ec65-4469-902b-ea0aca5870c0" 00:21:19.482 ], 00:21:19.482 "product_name": "Malloc disk", 00:21:19.482 "block_size": 512, 00:21:19.482 "num_blocks": 65536, 00:21:19.482 "uuid": "34a3cd28-ec65-4469-902b-ea0aca5870c0", 00:21:19.482 "assigned_rate_limits": { 00:21:19.482 "rw_ios_per_sec": 0, 00:21:19.482 "rw_mbytes_per_sec": 0, 00:21:19.482 "r_mbytes_per_sec": 0, 00:21:19.482 "w_mbytes_per_sec": 0 00:21:19.482 }, 00:21:19.482 "claimed": false, 00:21:19.482 "zoned": false, 00:21:19.482 "supported_io_types": { 00:21:19.482 "read": true, 00:21:19.482 "write": true, 00:21:19.482 "unmap": true, 00:21:19.482 "flush": true, 00:21:19.482 "reset": true, 00:21:19.482 "nvme_admin": false, 00:21:19.482 "nvme_io": false, 00:21:19.482 "nvme_io_md": false, 00:21:19.482 "write_zeroes": true, 00:21:19.482 "zcopy": true, 00:21:19.482 "get_zone_info": false, 00:21:19.482 "zone_management": false, 00:21:19.482 "zone_append": false, 00:21:19.482 "compare": false, 00:21:19.482 "compare_and_write": false, 00:21:19.482 "abort": true, 00:21:19.482 "seek_hole": false, 00:21:19.482 "seek_data": false, 00:21:19.482 "copy": true, 00:21:19.482 "nvme_iov_md": false 00:21:19.482 }, 00:21:19.482 "memory_domains": [ 00:21:19.482 { 00:21:19.482 "dma_device_id": "system", 00:21:19.482 "dma_device_type": 1 00:21:19.482 }, 00:21:19.482 { 00:21:19.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.482 "dma_device_type": 2 00:21:19.482 } 00:21:19.482 ], 00:21:19.482 "driver_specific": {} 00:21:19.482 } 00:21:19.482 ] 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:19.482 09:21:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.482 [2024-10-15 09:21:03.398637] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:19.482 [2024-10-15 09:21:03.398873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:19.482 [2024-10-15 09:21:03.399034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:19.482 [2024-10-15 09:21:03.401738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.482 09:21:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.482 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.741 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.741 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.741 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.741 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.741 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.741 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.741 "name": "Existed_Raid", 00:21:19.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.741 "strip_size_kb": 64, 00:21:19.741 "state": "configuring", 00:21:19.741 "raid_level": "raid5f", 00:21:19.741 "superblock": false, 00:21:19.741 "num_base_bdevs": 3, 00:21:19.741 "num_base_bdevs_discovered": 2, 00:21:19.741 "num_base_bdevs_operational": 3, 00:21:19.741 "base_bdevs_list": [ 00:21:19.741 { 00:21:19.741 "name": "BaseBdev1", 00:21:19.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.741 "is_configured": false, 00:21:19.741 "data_offset": 0, 00:21:19.741 "data_size": 0 00:21:19.741 }, 00:21:19.741 { 00:21:19.741 "name": "BaseBdev2", 00:21:19.741 "uuid": "562b286d-1419-4a66-baa6-b79c42c4d35f", 00:21:19.741 "is_configured": true, 00:21:19.741 "data_offset": 0, 00:21:19.741 "data_size": 65536 00:21:19.741 }, 00:21:19.741 { 00:21:19.741 "name": "BaseBdev3", 00:21:19.741 "uuid": "34a3cd28-ec65-4469-902b-ea0aca5870c0", 00:21:19.741 "is_configured": true, 
00:21:19.741 "data_offset": 0, 00:21:19.741 "data_size": 65536 00:21:19.741 } 00:21:19.741 ] 00:21:19.741 }' 00:21:19.741 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.741 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.000 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:20.000 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.000 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.259 [2024-10-15 09:21:03.930688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:20.259 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.259 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:20.259 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:20.259 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:20.259 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:20.259 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:20.259 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:20.259 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:20.259 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:20.259 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:20.259 09:21:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:20.259 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.259 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.259 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.259 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:20.259 09:21:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.259 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:20.259 "name": "Existed_Raid", 00:21:20.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.259 "strip_size_kb": 64, 00:21:20.259 "state": "configuring", 00:21:20.259 "raid_level": "raid5f", 00:21:20.259 "superblock": false, 00:21:20.259 "num_base_bdevs": 3, 00:21:20.259 "num_base_bdevs_discovered": 1, 00:21:20.259 "num_base_bdevs_operational": 3, 00:21:20.259 "base_bdevs_list": [ 00:21:20.259 { 00:21:20.259 "name": "BaseBdev1", 00:21:20.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.259 "is_configured": false, 00:21:20.259 "data_offset": 0, 00:21:20.259 "data_size": 0 00:21:20.259 }, 00:21:20.259 { 00:21:20.259 "name": null, 00:21:20.259 "uuid": "562b286d-1419-4a66-baa6-b79c42c4d35f", 00:21:20.259 "is_configured": false, 00:21:20.259 "data_offset": 0, 00:21:20.259 "data_size": 65536 00:21:20.259 }, 00:21:20.259 { 00:21:20.259 "name": "BaseBdev3", 00:21:20.259 "uuid": "34a3cd28-ec65-4469-902b-ea0aca5870c0", 00:21:20.259 "is_configured": true, 00:21:20.259 "data_offset": 0, 00:21:20.259 "data_size": 65536 00:21:20.259 } 00:21:20.259 ] 00:21:20.259 }' 00:21:20.259 09:21:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:20.259 09:21:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.828 [2024-10-15 09:21:04.555415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:20.828 BaseBdev1 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:20.828 09:21:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.828 [ 00:21:20.828 { 00:21:20.828 "name": "BaseBdev1", 00:21:20.828 "aliases": [ 00:21:20.828 "9fb70d0b-812f-461f-bc1c-bfe1dd2fa04e" 00:21:20.828 ], 00:21:20.828 "product_name": "Malloc disk", 00:21:20.828 "block_size": 512, 00:21:20.828 "num_blocks": 65536, 00:21:20.828 "uuid": "9fb70d0b-812f-461f-bc1c-bfe1dd2fa04e", 00:21:20.828 "assigned_rate_limits": { 00:21:20.828 "rw_ios_per_sec": 0, 00:21:20.828 "rw_mbytes_per_sec": 0, 00:21:20.828 "r_mbytes_per_sec": 0, 00:21:20.828 "w_mbytes_per_sec": 0 00:21:20.828 }, 00:21:20.828 "claimed": true, 00:21:20.828 "claim_type": "exclusive_write", 00:21:20.828 "zoned": false, 00:21:20.828 "supported_io_types": { 00:21:20.828 "read": true, 00:21:20.828 "write": true, 00:21:20.828 "unmap": true, 00:21:20.828 "flush": true, 00:21:20.828 "reset": true, 00:21:20.828 "nvme_admin": false, 00:21:20.828 "nvme_io": false, 00:21:20.828 "nvme_io_md": false, 00:21:20.828 "write_zeroes": true, 00:21:20.828 "zcopy": true, 00:21:20.828 "get_zone_info": false, 00:21:20.828 "zone_management": false, 00:21:20.828 "zone_append": false, 00:21:20.828 
"compare": false, 00:21:20.828 "compare_and_write": false, 00:21:20.828 "abort": true, 00:21:20.828 "seek_hole": false, 00:21:20.828 "seek_data": false, 00:21:20.828 "copy": true, 00:21:20.828 "nvme_iov_md": false 00:21:20.828 }, 00:21:20.828 "memory_domains": [ 00:21:20.828 { 00:21:20.828 "dma_device_id": "system", 00:21:20.828 "dma_device_type": 1 00:21:20.828 }, 00:21:20.828 { 00:21:20.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.828 "dma_device_type": 2 00:21:20.828 } 00:21:20.828 ], 00:21:20.828 "driver_specific": {} 00:21:20.828 } 00:21:20.828 ] 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:20.828 09:21:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:20.828 "name": "Existed_Raid", 00:21:20.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.828 "strip_size_kb": 64, 00:21:20.828 "state": "configuring", 00:21:20.828 "raid_level": "raid5f", 00:21:20.828 "superblock": false, 00:21:20.828 "num_base_bdevs": 3, 00:21:20.828 "num_base_bdevs_discovered": 2, 00:21:20.828 "num_base_bdevs_operational": 3, 00:21:20.828 "base_bdevs_list": [ 00:21:20.828 { 00:21:20.828 "name": "BaseBdev1", 00:21:20.828 "uuid": "9fb70d0b-812f-461f-bc1c-bfe1dd2fa04e", 00:21:20.828 "is_configured": true, 00:21:20.828 "data_offset": 0, 00:21:20.828 "data_size": 65536 00:21:20.828 }, 00:21:20.828 { 00:21:20.828 "name": null, 00:21:20.828 "uuid": "562b286d-1419-4a66-baa6-b79c42c4d35f", 00:21:20.828 "is_configured": false, 00:21:20.828 "data_offset": 0, 00:21:20.828 "data_size": 65536 00:21:20.828 }, 00:21:20.828 { 00:21:20.828 "name": "BaseBdev3", 00:21:20.828 "uuid": "34a3cd28-ec65-4469-902b-ea0aca5870c0", 00:21:20.828 "is_configured": true, 00:21:20.828 "data_offset": 0, 00:21:20.828 "data_size": 65536 00:21:20.828 } 00:21:20.828 ] 00:21:20.828 }' 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:20.828 09:21:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.396 09:21:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.396 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:21.396 09:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.397 [2024-10-15 09:21:05.187737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:21.397 09:21:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.397 "name": "Existed_Raid", 00:21:21.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.397 "strip_size_kb": 64, 00:21:21.397 "state": "configuring", 00:21:21.397 "raid_level": "raid5f", 00:21:21.397 "superblock": false, 00:21:21.397 "num_base_bdevs": 3, 00:21:21.397 "num_base_bdevs_discovered": 1, 00:21:21.397 "num_base_bdevs_operational": 3, 00:21:21.397 "base_bdevs_list": [ 00:21:21.397 { 00:21:21.397 "name": "BaseBdev1", 00:21:21.397 "uuid": "9fb70d0b-812f-461f-bc1c-bfe1dd2fa04e", 00:21:21.397 "is_configured": true, 00:21:21.397 "data_offset": 0, 00:21:21.397 "data_size": 65536 00:21:21.397 }, 00:21:21.397 { 00:21:21.397 "name": null, 00:21:21.397 "uuid": "562b286d-1419-4a66-baa6-b79c42c4d35f", 00:21:21.397 "is_configured": false, 00:21:21.397 "data_offset": 0, 00:21:21.397 "data_size": 65536 00:21:21.397 }, 00:21:21.397 { 00:21:21.397 "name": null, 
00:21:21.397 "uuid": "34a3cd28-ec65-4469-902b-ea0aca5870c0", 00:21:21.397 "is_configured": false, 00:21:21.397 "data_offset": 0, 00:21:21.397 "data_size": 65536 00:21:21.397 } 00:21:21.397 ] 00:21:21.397 }' 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.397 09:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.965 [2024-10-15 09:21:05.779983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:21.965 09:21:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.965 "name": "Existed_Raid", 00:21:21.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.965 "strip_size_kb": 64, 00:21:21.965 "state": "configuring", 00:21:21.965 "raid_level": "raid5f", 00:21:21.965 "superblock": false, 00:21:21.965 "num_base_bdevs": 3, 00:21:21.965 "num_base_bdevs_discovered": 2, 00:21:21.965 "num_base_bdevs_operational": 3, 00:21:21.965 "base_bdevs_list": [ 00:21:21.965 { 
00:21:21.965 "name": "BaseBdev1", 00:21:21.965 "uuid": "9fb70d0b-812f-461f-bc1c-bfe1dd2fa04e", 00:21:21.965 "is_configured": true, 00:21:21.965 "data_offset": 0, 00:21:21.965 "data_size": 65536 00:21:21.965 }, 00:21:21.965 { 00:21:21.965 "name": null, 00:21:21.965 "uuid": "562b286d-1419-4a66-baa6-b79c42c4d35f", 00:21:21.965 "is_configured": false, 00:21:21.965 "data_offset": 0, 00:21:21.965 "data_size": 65536 00:21:21.965 }, 00:21:21.965 { 00:21:21.965 "name": "BaseBdev3", 00:21:21.965 "uuid": "34a3cd28-ec65-4469-902b-ea0aca5870c0", 00:21:21.965 "is_configured": true, 00:21:21.965 "data_offset": 0, 00:21:21.965 "data_size": 65536 00:21:21.965 } 00:21:21.965 ] 00:21:21.965 }' 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.965 09:21:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.532 09:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.532 09:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.532 09:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.532 09:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:22.532 09:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.532 09:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:22.532 09:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:22.532 09:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.532 09:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.532 [2024-10-15 09:21:06.372128] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:22.791 09:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.791 09:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:22.791 09:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:22.791 09:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:22.791 09:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:22.791 09:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:22.791 09:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:22.791 09:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:22.791 09:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:22.791 09:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:22.791 09:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:22.791 09:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.791 09:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.791 09:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:22.791 09:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.791 09:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.791 09:21:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:22.791 "name": "Existed_Raid", 00:21:22.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.791 "strip_size_kb": 64, 00:21:22.791 "state": "configuring", 00:21:22.791 "raid_level": "raid5f", 00:21:22.791 "superblock": false, 00:21:22.791 "num_base_bdevs": 3, 00:21:22.791 "num_base_bdevs_discovered": 1, 00:21:22.791 "num_base_bdevs_operational": 3, 00:21:22.791 "base_bdevs_list": [ 00:21:22.791 { 00:21:22.791 "name": null, 00:21:22.791 "uuid": "9fb70d0b-812f-461f-bc1c-bfe1dd2fa04e", 00:21:22.791 "is_configured": false, 00:21:22.791 "data_offset": 0, 00:21:22.791 "data_size": 65536 00:21:22.791 }, 00:21:22.791 { 00:21:22.791 "name": null, 00:21:22.791 "uuid": "562b286d-1419-4a66-baa6-b79c42c4d35f", 00:21:22.791 "is_configured": false, 00:21:22.791 "data_offset": 0, 00:21:22.791 "data_size": 65536 00:21:22.791 }, 00:21:22.791 { 00:21:22.791 "name": "BaseBdev3", 00:21:22.791 "uuid": "34a3cd28-ec65-4469-902b-ea0aca5870c0", 00:21:22.791 "is_configured": true, 00:21:22.791 "data_offset": 0, 00:21:22.791 "data_size": 65536 00:21:22.791 } 00:21:22.791 ] 00:21:22.791 }' 00:21:22.791 09:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:22.791 09:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.359 09:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.359 09:21:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:23.359 09:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.359 09:21:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.359 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.359 09:21:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:23.359 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:23.359 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.359 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.359 [2024-10-15 09:21:07.043580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:23.359 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.359 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:23.359 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:23.359 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:23.359 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:23.359 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:23.359 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:23.359 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.359 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.359 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.359 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.359 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:23.359 09:21:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.359 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.359 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.359 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.359 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.359 "name": "Existed_Raid", 00:21:23.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.359 "strip_size_kb": 64, 00:21:23.359 "state": "configuring", 00:21:23.359 "raid_level": "raid5f", 00:21:23.360 "superblock": false, 00:21:23.360 "num_base_bdevs": 3, 00:21:23.360 "num_base_bdevs_discovered": 2, 00:21:23.360 "num_base_bdevs_operational": 3, 00:21:23.360 "base_bdevs_list": [ 00:21:23.360 { 00:21:23.360 "name": null, 00:21:23.360 "uuid": "9fb70d0b-812f-461f-bc1c-bfe1dd2fa04e", 00:21:23.360 "is_configured": false, 00:21:23.360 "data_offset": 0, 00:21:23.360 "data_size": 65536 00:21:23.360 }, 00:21:23.360 { 00:21:23.360 "name": "BaseBdev2", 00:21:23.360 "uuid": "562b286d-1419-4a66-baa6-b79c42c4d35f", 00:21:23.360 "is_configured": true, 00:21:23.360 "data_offset": 0, 00:21:23.360 "data_size": 65536 00:21:23.360 }, 00:21:23.360 { 00:21:23.360 "name": "BaseBdev3", 00:21:23.360 "uuid": "34a3cd28-ec65-4469-902b-ea0aca5870c0", 00:21:23.360 "is_configured": true, 00:21:23.360 "data_offset": 0, 00:21:23.360 "data_size": 65536 00:21:23.360 } 00:21:23.360 ] 00:21:23.360 }' 00:21:23.360 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.360 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.928 09:21:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9fb70d0b-812f-461f-bc1c-bfe1dd2fa04e 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.928 [2024-10-15 09:21:07.754340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:23.928 [2024-10-15 09:21:07.754428] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:23.928 [2024-10-15 09:21:07.754446] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:21:23.928 [2024-10-15 09:21:07.754784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:21:23.928 [2024-10-15 09:21:07.759909] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:23.928 [2024-10-15 09:21:07.759936] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:23.928 [2024-10-15 09:21:07.760326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.928 NewBaseBdev 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.928 09:21:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.928 [ 00:21:23.928 { 00:21:23.928 "name": "NewBaseBdev", 00:21:23.928 "aliases": [ 00:21:23.928 "9fb70d0b-812f-461f-bc1c-bfe1dd2fa04e" 00:21:23.928 ], 00:21:23.928 "product_name": "Malloc disk", 00:21:23.928 "block_size": 512, 00:21:23.928 "num_blocks": 65536, 00:21:23.928 "uuid": "9fb70d0b-812f-461f-bc1c-bfe1dd2fa04e", 00:21:23.928 "assigned_rate_limits": { 00:21:23.928 "rw_ios_per_sec": 0, 00:21:23.928 "rw_mbytes_per_sec": 0, 00:21:23.928 "r_mbytes_per_sec": 0, 00:21:23.928 "w_mbytes_per_sec": 0 00:21:23.928 }, 00:21:23.928 "claimed": true, 00:21:23.928 "claim_type": "exclusive_write", 00:21:23.928 "zoned": false, 00:21:23.928 "supported_io_types": { 00:21:23.928 "read": true, 00:21:23.928 "write": true, 00:21:23.928 "unmap": true, 00:21:23.928 "flush": true, 00:21:23.928 "reset": true, 00:21:23.928 "nvme_admin": false, 00:21:23.928 "nvme_io": false, 00:21:23.928 "nvme_io_md": false, 00:21:23.928 "write_zeroes": true, 00:21:23.928 "zcopy": true, 00:21:23.928 "get_zone_info": false, 00:21:23.928 "zone_management": false, 00:21:23.928 "zone_append": false, 00:21:23.928 "compare": false, 00:21:23.928 "compare_and_write": false, 00:21:23.928 "abort": true, 00:21:23.928 "seek_hole": false, 00:21:23.928 "seek_data": false, 00:21:23.928 "copy": true, 00:21:23.928 "nvme_iov_md": false 00:21:23.928 }, 00:21:23.928 "memory_domains": [ 00:21:23.928 { 00:21:23.928 "dma_device_id": "system", 00:21:23.928 "dma_device_type": 1 00:21:23.928 }, 00:21:23.928 { 00:21:23.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.928 "dma_device_type": 2 00:21:23.928 } 00:21:23.928 ], 00:21:23.928 "driver_specific": {} 00:21:23.928 } 00:21:23.928 ] 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:23.928 09:21:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.928 "name": "Existed_Raid", 00:21:23.928 "uuid": "e459ce88-afb0-4932-83a1-0450d65c85c0", 00:21:23.928 "strip_size_kb": 64, 00:21:23.928 "state": "online", 
00:21:23.928 "raid_level": "raid5f", 00:21:23.928 "superblock": false, 00:21:23.928 "num_base_bdevs": 3, 00:21:23.928 "num_base_bdevs_discovered": 3, 00:21:23.928 "num_base_bdevs_operational": 3, 00:21:23.928 "base_bdevs_list": [ 00:21:23.928 { 00:21:23.928 "name": "NewBaseBdev", 00:21:23.928 "uuid": "9fb70d0b-812f-461f-bc1c-bfe1dd2fa04e", 00:21:23.928 "is_configured": true, 00:21:23.928 "data_offset": 0, 00:21:23.928 "data_size": 65536 00:21:23.928 }, 00:21:23.928 { 00:21:23.928 "name": "BaseBdev2", 00:21:23.928 "uuid": "562b286d-1419-4a66-baa6-b79c42c4d35f", 00:21:23.928 "is_configured": true, 00:21:23.928 "data_offset": 0, 00:21:23.928 "data_size": 65536 00:21:23.928 }, 00:21:23.928 { 00:21:23.928 "name": "BaseBdev3", 00:21:23.928 "uuid": "34a3cd28-ec65-4469-902b-ea0aca5870c0", 00:21:23.928 "is_configured": true, 00:21:23.928 "data_offset": 0, 00:21:23.928 "data_size": 65536 00:21:23.928 } 00:21:23.928 ] 00:21:23.928 }' 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.928 09:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.506 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:24.506 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:24.506 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:24.506 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:24.506 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:24.506 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:24.506 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:24.506 09:21:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:24.506 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.506 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.506 [2024-10-15 09:21:08.318924] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:24.506 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.506 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:24.506 "name": "Existed_Raid", 00:21:24.506 "aliases": [ 00:21:24.506 "e459ce88-afb0-4932-83a1-0450d65c85c0" 00:21:24.506 ], 00:21:24.506 "product_name": "Raid Volume", 00:21:24.506 "block_size": 512, 00:21:24.506 "num_blocks": 131072, 00:21:24.506 "uuid": "e459ce88-afb0-4932-83a1-0450d65c85c0", 00:21:24.506 "assigned_rate_limits": { 00:21:24.506 "rw_ios_per_sec": 0, 00:21:24.506 "rw_mbytes_per_sec": 0, 00:21:24.506 "r_mbytes_per_sec": 0, 00:21:24.506 "w_mbytes_per_sec": 0 00:21:24.506 }, 00:21:24.506 "claimed": false, 00:21:24.506 "zoned": false, 00:21:24.506 "supported_io_types": { 00:21:24.506 "read": true, 00:21:24.506 "write": true, 00:21:24.506 "unmap": false, 00:21:24.506 "flush": false, 00:21:24.506 "reset": true, 00:21:24.506 "nvme_admin": false, 00:21:24.506 "nvme_io": false, 00:21:24.506 "nvme_io_md": false, 00:21:24.506 "write_zeroes": true, 00:21:24.506 "zcopy": false, 00:21:24.506 "get_zone_info": false, 00:21:24.506 "zone_management": false, 00:21:24.506 "zone_append": false, 00:21:24.506 "compare": false, 00:21:24.506 "compare_and_write": false, 00:21:24.506 "abort": false, 00:21:24.506 "seek_hole": false, 00:21:24.506 "seek_data": false, 00:21:24.506 "copy": false, 00:21:24.506 "nvme_iov_md": false 00:21:24.506 }, 00:21:24.506 "driver_specific": { 00:21:24.506 "raid": { 00:21:24.506 "uuid": 
"e459ce88-afb0-4932-83a1-0450d65c85c0", 00:21:24.506 "strip_size_kb": 64, 00:21:24.506 "state": "online", 00:21:24.506 "raid_level": "raid5f", 00:21:24.506 "superblock": false, 00:21:24.506 "num_base_bdevs": 3, 00:21:24.506 "num_base_bdevs_discovered": 3, 00:21:24.506 "num_base_bdevs_operational": 3, 00:21:24.506 "base_bdevs_list": [ 00:21:24.506 { 00:21:24.506 "name": "NewBaseBdev", 00:21:24.506 "uuid": "9fb70d0b-812f-461f-bc1c-bfe1dd2fa04e", 00:21:24.506 "is_configured": true, 00:21:24.506 "data_offset": 0, 00:21:24.506 "data_size": 65536 00:21:24.506 }, 00:21:24.506 { 00:21:24.506 "name": "BaseBdev2", 00:21:24.506 "uuid": "562b286d-1419-4a66-baa6-b79c42c4d35f", 00:21:24.506 "is_configured": true, 00:21:24.506 "data_offset": 0, 00:21:24.506 "data_size": 65536 00:21:24.506 }, 00:21:24.506 { 00:21:24.506 "name": "BaseBdev3", 00:21:24.506 "uuid": "34a3cd28-ec65-4469-902b-ea0aca5870c0", 00:21:24.506 "is_configured": true, 00:21:24.506 "data_offset": 0, 00:21:24.506 "data_size": 65536 00:21:24.507 } 00:21:24.507 ] 00:21:24.507 } 00:21:24.507 } 00:21:24.507 }' 00:21:24.507 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:24.507 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:24.507 BaseBdev2 00:21:24.507 BaseBdev3' 00:21:24.507 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.764 09:21:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.764 [2024-10-15 09:21:08.646752] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:24.764 [2024-10-15 09:21:08.646799] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:24.764 [2024-10-15 09:21:08.646953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:24.764 [2024-10-15 09:21:08.647391] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:24.764 [2024-10-15 09:21:08.647420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80528 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 80528 ']' 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 80528 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80528 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80528' 00:21:24.764 killing process with pid 80528 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 80528 00:21:24.764 [2024-10-15 09:21:08.687015] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:24.764 09:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 80528 00:21:25.394 [2024-10-15 09:21:08.992626] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:21:26.328 ************************************ 00:21:26.328 END TEST raid5f_state_function_test 00:21:26.328 ************************************ 00:21:26.328 00:21:26.328 real 0m12.126s 00:21:26.328 user 0m19.787s 00:21:26.328 sys 0m1.864s 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.328 09:21:10 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:21:26.328 09:21:10 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:26.328 09:21:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:26.328 09:21:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:26.328 ************************************ 00:21:26.328 START TEST raid5f_state_function_test_sb 00:21:26.328 ************************************ 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:26.328 09:21:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:26.328 Process raid pid: 81164 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81164 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81164' 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 81164 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 81164 ']' 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:26.328 09:21:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.587 [2024-10-15 09:21:10.317005] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:21:26.587 [2024-10-15 09:21:10.318262] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.587 [2024-10-15 09:21:10.494872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.844 [2024-10-15 09:21:10.648886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.145 [2024-10-15 09:21:10.888301] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:27.145 [2024-10-15 09:21:10.888623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:27.406 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:27.406 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:21:27.406 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:27.407 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.407 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.407 [2024-10-15 09:21:11.252109] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:27.407 [2024-10-15 09:21:11.252199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:27.407 [2024-10-15 09:21:11.252217] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:27.407 [2024-10-15 09:21:11.252235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:27.407 [2024-10-15 09:21:11.252246] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:21:27.407 [2024-10-15 09:21:11.252261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:27.407 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.407 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:27.407 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:27.407 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:27.407 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:27.407 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:27.407 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:27.407 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.407 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.407 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.407 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.407 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.407 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.407 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.407 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.407 09:21:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.407 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.407 "name": "Existed_Raid", 00:21:27.407 "uuid": "758ace54-346b-42e8-ad66-131f60b8e7ad", 00:21:27.407 "strip_size_kb": 64, 00:21:27.407 "state": "configuring", 00:21:27.407 "raid_level": "raid5f", 00:21:27.407 "superblock": true, 00:21:27.407 "num_base_bdevs": 3, 00:21:27.407 "num_base_bdevs_discovered": 0, 00:21:27.407 "num_base_bdevs_operational": 3, 00:21:27.407 "base_bdevs_list": [ 00:21:27.407 { 00:21:27.407 "name": "BaseBdev1", 00:21:27.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.407 "is_configured": false, 00:21:27.407 "data_offset": 0, 00:21:27.407 "data_size": 0 00:21:27.407 }, 00:21:27.407 { 00:21:27.407 "name": "BaseBdev2", 00:21:27.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.407 "is_configured": false, 00:21:27.407 "data_offset": 0, 00:21:27.407 "data_size": 0 00:21:27.407 }, 00:21:27.407 { 00:21:27.407 "name": "BaseBdev3", 00:21:27.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.407 "is_configured": false, 00:21:27.407 "data_offset": 0, 00:21:27.407 "data_size": 0 00:21:27.407 } 00:21:27.407 ] 00:21:27.407 }' 00:21:27.407 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.407 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.974 [2024-10-15 09:21:11.772174] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:27.974 
[2024-10-15 09:21:11.772234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.974 [2024-10-15 09:21:11.780187] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:27.974 [2024-10-15 09:21:11.780268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:27.974 [2024-10-15 09:21:11.780285] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:27.974 [2024-10-15 09:21:11.780302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:27.974 [2024-10-15 09:21:11.780312] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:27.974 [2024-10-15 09:21:11.780327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.974 [2024-10-15 09:21:11.829002] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:27.974 BaseBdev1 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.974 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.974 [ 00:21:27.974 { 00:21:27.974 "name": "BaseBdev1", 00:21:27.974 "aliases": [ 00:21:27.974 "9623dd45-f965-4969-b0e3-9709567e2579" 00:21:27.974 ], 00:21:27.974 "product_name": "Malloc disk", 00:21:27.974 "block_size": 512, 00:21:27.974 
"num_blocks": 65536, 00:21:27.974 "uuid": "9623dd45-f965-4969-b0e3-9709567e2579", 00:21:27.974 "assigned_rate_limits": { 00:21:27.974 "rw_ios_per_sec": 0, 00:21:27.974 "rw_mbytes_per_sec": 0, 00:21:27.974 "r_mbytes_per_sec": 0, 00:21:27.975 "w_mbytes_per_sec": 0 00:21:27.975 }, 00:21:27.975 "claimed": true, 00:21:27.975 "claim_type": "exclusive_write", 00:21:27.975 "zoned": false, 00:21:27.975 "supported_io_types": { 00:21:27.975 "read": true, 00:21:27.975 "write": true, 00:21:27.975 "unmap": true, 00:21:27.975 "flush": true, 00:21:27.975 "reset": true, 00:21:27.975 "nvme_admin": false, 00:21:27.975 "nvme_io": false, 00:21:27.975 "nvme_io_md": false, 00:21:27.975 "write_zeroes": true, 00:21:27.975 "zcopy": true, 00:21:27.975 "get_zone_info": false, 00:21:27.975 "zone_management": false, 00:21:27.975 "zone_append": false, 00:21:27.975 "compare": false, 00:21:27.975 "compare_and_write": false, 00:21:27.975 "abort": true, 00:21:27.975 "seek_hole": false, 00:21:27.975 "seek_data": false, 00:21:27.975 "copy": true, 00:21:27.975 "nvme_iov_md": false 00:21:27.975 }, 00:21:27.975 "memory_domains": [ 00:21:27.975 { 00:21:27.975 "dma_device_id": "system", 00:21:27.975 "dma_device_type": 1 00:21:27.975 }, 00:21:27.975 { 00:21:27.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:27.975 "dma_device_type": 2 00:21:27.975 } 00:21:27.975 ], 00:21:27.975 "driver_specific": {} 00:21:27.975 } 00:21:27.975 ] 00:21:27.975 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.975 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:27.975 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:27.975 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:27.975 09:21:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:27.975 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:27.975 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:27.975 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:27.975 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.975 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.975 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.975 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.975 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.975 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.975 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.975 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.975 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.233 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.233 "name": "Existed_Raid", 00:21:28.233 "uuid": "5e7a43b1-f727-4693-982e-cec5f86090ec", 00:21:28.233 "strip_size_kb": 64, 00:21:28.233 "state": "configuring", 00:21:28.233 "raid_level": "raid5f", 00:21:28.233 "superblock": true, 00:21:28.233 "num_base_bdevs": 3, 00:21:28.233 "num_base_bdevs_discovered": 1, 00:21:28.233 "num_base_bdevs_operational": 3, 00:21:28.233 "base_bdevs_list": [ 00:21:28.233 { 00:21:28.233 
"name": "BaseBdev1", 00:21:28.233 "uuid": "9623dd45-f965-4969-b0e3-9709567e2579", 00:21:28.233 "is_configured": true, 00:21:28.233 "data_offset": 2048, 00:21:28.233 "data_size": 63488 00:21:28.233 }, 00:21:28.233 { 00:21:28.233 "name": "BaseBdev2", 00:21:28.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.233 "is_configured": false, 00:21:28.233 "data_offset": 0, 00:21:28.233 "data_size": 0 00:21:28.233 }, 00:21:28.233 { 00:21:28.233 "name": "BaseBdev3", 00:21:28.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.233 "is_configured": false, 00:21:28.233 "data_offset": 0, 00:21:28.233 "data_size": 0 00:21:28.233 } 00:21:28.233 ] 00:21:28.233 }' 00:21:28.233 09:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.233 09:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.493 [2024-10-15 09:21:12.389258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:28.493 [2024-10-15 09:21:12.389519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:21:28.493 [2024-10-15 09:21:12.401389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:28.493 [2024-10-15 09:21:12.404140] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:28.493 [2024-10-15 09:21:12.404218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:28.493 [2024-10-15 09:21:12.404247] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:28.493 [2024-10-15 09:21:12.404264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.493 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.751 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.751 09:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.751 "name": "Existed_Raid", 00:21:28.751 "uuid": "eb06ba5d-0bb1-402f-9fbb-5db81751eb07", 00:21:28.751 "strip_size_kb": 64, 00:21:28.751 "state": "configuring", 00:21:28.751 "raid_level": "raid5f", 00:21:28.751 "superblock": true, 00:21:28.751 "num_base_bdevs": 3, 00:21:28.751 "num_base_bdevs_discovered": 1, 00:21:28.751 "num_base_bdevs_operational": 3, 00:21:28.751 "base_bdevs_list": [ 00:21:28.751 { 00:21:28.751 "name": "BaseBdev1", 00:21:28.751 "uuid": "9623dd45-f965-4969-b0e3-9709567e2579", 00:21:28.751 "is_configured": true, 00:21:28.751 "data_offset": 2048, 00:21:28.751 "data_size": 63488 00:21:28.751 }, 00:21:28.751 { 00:21:28.751 "name": "BaseBdev2", 00:21:28.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.751 "is_configured": false, 00:21:28.751 "data_offset": 0, 00:21:28.751 "data_size": 0 00:21:28.751 }, 00:21:28.751 { 00:21:28.751 "name": "BaseBdev3", 00:21:28.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.751 "is_configured": false, 00:21:28.751 "data_offset": 0, 00:21:28.751 "data_size": 
0 00:21:28.751 } 00:21:28.751 ] 00:21:28.751 }' 00:21:28.751 09:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.751 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.320 09:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:29.320 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.320 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.320 [2024-10-15 09:21:12.984500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:29.320 BaseBdev2 00:21:29.320 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.320 09:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:29.320 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:29.320 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:29.320 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:29.320 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:29.320 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:29.320 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:21:29.320 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.320 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.320 09:21:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.320 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:29.320 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.320 09:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.320 [ 00:21:29.320 { 00:21:29.320 "name": "BaseBdev2", 00:21:29.320 "aliases": [ 00:21:29.320 "e5091c16-b950-421b-bb13-137f6105be61" 00:21:29.320 ], 00:21:29.320 "product_name": "Malloc disk", 00:21:29.320 "block_size": 512, 00:21:29.320 "num_blocks": 65536, 00:21:29.320 "uuid": "e5091c16-b950-421b-bb13-137f6105be61", 00:21:29.320 "assigned_rate_limits": { 00:21:29.320 "rw_ios_per_sec": 0, 00:21:29.320 "rw_mbytes_per_sec": 0, 00:21:29.320 "r_mbytes_per_sec": 0, 00:21:29.320 "w_mbytes_per_sec": 0 00:21:29.320 }, 00:21:29.320 "claimed": true, 00:21:29.320 "claim_type": "exclusive_write", 00:21:29.320 "zoned": false, 00:21:29.320 "supported_io_types": { 00:21:29.320 "read": true, 00:21:29.320 "write": true, 00:21:29.320 "unmap": true, 00:21:29.320 "flush": true, 00:21:29.320 "reset": true, 00:21:29.320 "nvme_admin": false, 00:21:29.320 "nvme_io": false, 00:21:29.320 "nvme_io_md": false, 00:21:29.320 "write_zeroes": true, 00:21:29.320 "zcopy": true, 00:21:29.320 "get_zone_info": false, 00:21:29.320 "zone_management": false, 00:21:29.320 "zone_append": false, 00:21:29.320 "compare": false, 00:21:29.320 "compare_and_write": false, 00:21:29.320 "abort": true, 00:21:29.320 "seek_hole": false, 00:21:29.320 "seek_data": false, 00:21:29.320 "copy": true, 00:21:29.320 "nvme_iov_md": false 00:21:29.320 }, 00:21:29.320 "memory_domains": [ 00:21:29.320 { 00:21:29.320 "dma_device_id": "system", 00:21:29.320 "dma_device_type": 1 00:21:29.320 }, 00:21:29.320 { 00:21:29.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.320 "dma_device_type": 2 00:21:29.320 } 
00:21:29.320 ], 00:21:29.320 "driver_specific": {} 00:21:29.320 } 00:21:29.320 ] 00:21:29.320 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.320 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:29.320 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:29.320 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:29.320 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:29.320 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:29.320 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:29.320 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:29.320 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:29.320 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:29.320 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.320 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.320 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.320 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.320 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.321 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:21:29.321 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.321 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.321 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.321 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.321 "name": "Existed_Raid", 00:21:29.321 "uuid": "eb06ba5d-0bb1-402f-9fbb-5db81751eb07", 00:21:29.321 "strip_size_kb": 64, 00:21:29.321 "state": "configuring", 00:21:29.321 "raid_level": "raid5f", 00:21:29.321 "superblock": true, 00:21:29.321 "num_base_bdevs": 3, 00:21:29.321 "num_base_bdevs_discovered": 2, 00:21:29.321 "num_base_bdevs_operational": 3, 00:21:29.321 "base_bdevs_list": [ 00:21:29.321 { 00:21:29.321 "name": "BaseBdev1", 00:21:29.321 "uuid": "9623dd45-f965-4969-b0e3-9709567e2579", 00:21:29.321 "is_configured": true, 00:21:29.321 "data_offset": 2048, 00:21:29.321 "data_size": 63488 00:21:29.321 }, 00:21:29.321 { 00:21:29.321 "name": "BaseBdev2", 00:21:29.321 "uuid": "e5091c16-b950-421b-bb13-137f6105be61", 00:21:29.321 "is_configured": true, 00:21:29.321 "data_offset": 2048, 00:21:29.321 "data_size": 63488 00:21:29.321 }, 00:21:29.321 { 00:21:29.321 "name": "BaseBdev3", 00:21:29.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.321 "is_configured": false, 00:21:29.321 "data_offset": 0, 00:21:29.321 "data_size": 0 00:21:29.321 } 00:21:29.321 ] 00:21:29.321 }' 00:21:29.321 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.321 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.889 [2024-10-15 09:21:13.603769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:29.889 [2024-10-15 09:21:13.604268] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:29.889 [2024-10-15 09:21:13.604319] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:29.889 BaseBdev3 00:21:29.889 [2024-10-15 09:21:13.604739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.889 [2024-10-15 09:21:13.611631] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:29.889 [2024-10-15 09:21:13.611856] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:29.889 [2024-10-15 09:21:13.612416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.889 [ 00:21:29.889 { 00:21:29.889 "name": "BaseBdev3", 00:21:29.889 "aliases": [ 00:21:29.889 "d1508699-ed51-4713-93b3-e8ff2f43395f" 00:21:29.889 ], 00:21:29.889 "product_name": "Malloc disk", 00:21:29.889 "block_size": 512, 00:21:29.889 "num_blocks": 65536, 00:21:29.889 "uuid": "d1508699-ed51-4713-93b3-e8ff2f43395f", 00:21:29.889 "assigned_rate_limits": { 00:21:29.889 "rw_ios_per_sec": 0, 00:21:29.889 "rw_mbytes_per_sec": 0, 00:21:29.889 "r_mbytes_per_sec": 0, 00:21:29.889 "w_mbytes_per_sec": 0 00:21:29.889 }, 00:21:29.889 "claimed": true, 00:21:29.889 "claim_type": "exclusive_write", 00:21:29.889 "zoned": false, 00:21:29.889 "supported_io_types": { 00:21:29.889 "read": true, 00:21:29.889 "write": true, 00:21:29.889 "unmap": true, 00:21:29.889 "flush": true, 00:21:29.889 "reset": true, 00:21:29.889 "nvme_admin": false, 00:21:29.889 "nvme_io": false, 00:21:29.889 "nvme_io_md": false, 00:21:29.889 "write_zeroes": true, 00:21:29.889 "zcopy": true, 00:21:29.889 "get_zone_info": false, 00:21:29.889 "zone_management": false, 00:21:29.889 "zone_append": false, 00:21:29.889 "compare": false, 00:21:29.889 "compare_and_write": false, 00:21:29.889 "abort": true, 00:21:29.889 "seek_hole": false, 00:21:29.889 "seek_data": false, 00:21:29.889 "copy": true, 00:21:29.889 
"nvme_iov_md": false 00:21:29.889 }, 00:21:29.889 "memory_domains": [ 00:21:29.889 { 00:21:29.889 "dma_device_id": "system", 00:21:29.889 "dma_device_type": 1 00:21:29.889 }, 00:21:29.889 { 00:21:29.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.889 "dma_device_type": 2 00:21:29.889 } 00:21:29.889 ], 00:21:29.889 "driver_specific": {} 00:21:29.889 } 00:21:29.889 ] 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.889 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.889 "name": "Existed_Raid", 00:21:29.889 "uuid": "eb06ba5d-0bb1-402f-9fbb-5db81751eb07", 00:21:29.889 "strip_size_kb": 64, 00:21:29.889 "state": "online", 00:21:29.890 "raid_level": "raid5f", 00:21:29.890 "superblock": true, 00:21:29.890 "num_base_bdevs": 3, 00:21:29.890 "num_base_bdevs_discovered": 3, 00:21:29.890 "num_base_bdevs_operational": 3, 00:21:29.890 "base_bdevs_list": [ 00:21:29.890 { 00:21:29.890 "name": "BaseBdev1", 00:21:29.890 "uuid": "9623dd45-f965-4969-b0e3-9709567e2579", 00:21:29.890 "is_configured": true, 00:21:29.890 "data_offset": 2048, 00:21:29.890 "data_size": 63488 00:21:29.890 }, 00:21:29.890 { 00:21:29.890 "name": "BaseBdev2", 00:21:29.890 "uuid": "e5091c16-b950-421b-bb13-137f6105be61", 00:21:29.890 "is_configured": true, 00:21:29.890 "data_offset": 2048, 00:21:29.890 "data_size": 63488 00:21:29.890 }, 00:21:29.890 { 00:21:29.890 "name": "BaseBdev3", 00:21:29.890 "uuid": "d1508699-ed51-4713-93b3-e8ff2f43395f", 00:21:29.890 "is_configured": true, 00:21:29.890 "data_offset": 2048, 00:21:29.890 "data_size": 63488 00:21:29.890 } 00:21:29.890 ] 00:21:29.890 }' 00:21:29.890 09:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.890 09:21:13 
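At this point the array has reached the `online` state with all three base bdevs discovered. The `verify_raid_bdev_state` checks the trace keeps repeating (locals `expected_state`, `raid_level`, `strip_size`, `num_base_bdevs_operational`, then a `jq` select on the `bdev_raid_get_bdevs all` output) reduce, roughly, to field comparisons on that JSON. A minimal sketch of the same comparison in Python — the JSON literal is the final state captured in the log above; the function is an illustration, not the actual shell helper:

```python
import json

# State of "Existed_Raid" as dumped by `rpc_cmd bdev_raid_get_bdevs all`
# in the log (base_bdevs_list trimmed for brevity).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid5f",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    # Approximation of the shell function's comparisons: state, RAID level,
    # strip size, operational count, and that every operational base bdev
    # has been discovered.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    assert info["num_base_bdevs_discovered"] == operational
    return True

verify_raid_bdev_state(raid_bdev_info, "online", "raid5f", 64, 3)
```

Earlier in the trace the same helper is called with `expected_state=configuring` while `num_base_bdevs_discovered` is still 0, 1, or 2 — only the final call expects `online`.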
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.458 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:30.458 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:30.458 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:30.458 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:30.458 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:30.458 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:30.458 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:30.458 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:30.458 09:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.458 09:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.458 [2024-10-15 09:21:14.176339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:30.458 09:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.458 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:30.458 "name": "Existed_Raid", 00:21:30.458 "aliases": [ 00:21:30.458 "eb06ba5d-0bb1-402f-9fbb-5db81751eb07" 00:21:30.458 ], 00:21:30.458 "product_name": "Raid Volume", 00:21:30.458 "block_size": 512, 00:21:30.458 "num_blocks": 126976, 00:21:30.458 "uuid": "eb06ba5d-0bb1-402f-9fbb-5db81751eb07", 00:21:30.458 "assigned_rate_limits": { 00:21:30.458 "rw_ios_per_sec": 0, 00:21:30.458 
"rw_mbytes_per_sec": 0, 00:21:30.458 "r_mbytes_per_sec": 0, 00:21:30.458 "w_mbytes_per_sec": 0 00:21:30.458 }, 00:21:30.458 "claimed": false, 00:21:30.458 "zoned": false, 00:21:30.458 "supported_io_types": { 00:21:30.458 "read": true, 00:21:30.458 "write": true, 00:21:30.458 "unmap": false, 00:21:30.458 "flush": false, 00:21:30.458 "reset": true, 00:21:30.458 "nvme_admin": false, 00:21:30.458 "nvme_io": false, 00:21:30.458 "nvme_io_md": false, 00:21:30.458 "write_zeroes": true, 00:21:30.458 "zcopy": false, 00:21:30.458 "get_zone_info": false, 00:21:30.458 "zone_management": false, 00:21:30.458 "zone_append": false, 00:21:30.458 "compare": false, 00:21:30.458 "compare_and_write": false, 00:21:30.458 "abort": false, 00:21:30.458 "seek_hole": false, 00:21:30.458 "seek_data": false, 00:21:30.458 "copy": false, 00:21:30.458 "nvme_iov_md": false 00:21:30.458 }, 00:21:30.458 "driver_specific": { 00:21:30.458 "raid": { 00:21:30.458 "uuid": "eb06ba5d-0bb1-402f-9fbb-5db81751eb07", 00:21:30.458 "strip_size_kb": 64, 00:21:30.458 "state": "online", 00:21:30.458 "raid_level": "raid5f", 00:21:30.458 "superblock": true, 00:21:30.458 "num_base_bdevs": 3, 00:21:30.458 "num_base_bdevs_discovered": 3, 00:21:30.458 "num_base_bdevs_operational": 3, 00:21:30.458 "base_bdevs_list": [ 00:21:30.458 { 00:21:30.458 "name": "BaseBdev1", 00:21:30.458 "uuid": "9623dd45-f965-4969-b0e3-9709567e2579", 00:21:30.458 "is_configured": true, 00:21:30.458 "data_offset": 2048, 00:21:30.458 "data_size": 63488 00:21:30.458 }, 00:21:30.458 { 00:21:30.458 "name": "BaseBdev2", 00:21:30.458 "uuid": "e5091c16-b950-421b-bb13-137f6105be61", 00:21:30.458 "is_configured": true, 00:21:30.458 "data_offset": 2048, 00:21:30.458 "data_size": 63488 00:21:30.458 }, 00:21:30.458 { 00:21:30.458 "name": "BaseBdev3", 00:21:30.458 "uuid": "d1508699-ed51-4713-93b3-e8ff2f43395f", 00:21:30.458 "is_configured": true, 00:21:30.458 "data_offset": 2048, 00:21:30.458 "data_size": 63488 00:21:30.458 } 00:21:30.458 ] 00:21:30.458 } 
00:21:30.458 } 00:21:30.458 }' 00:21:30.458 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:30.458 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:30.458 BaseBdev2 00:21:30.458 BaseBdev3' 00:21:30.458 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:30.458 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:30.458 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:30.458 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:30.458 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:30.458 09:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.459 09:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.459 09:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.717 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:30.717 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:30.717 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:30.717 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:30.717 09:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:30.717 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:30.717 09:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.717 09:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.717 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:30.717 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:30.717 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:30.717 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.718 [2024-10-15 09:21:14.504095] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.718 09:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.976 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.976 "name": "Existed_Raid", 00:21:30.976 "uuid": "eb06ba5d-0bb1-402f-9fbb-5db81751eb07", 00:21:30.976 "strip_size_kb": 64, 00:21:30.976 "state": "online", 00:21:30.976 "raid_level": "raid5f", 00:21:30.976 "superblock": true, 00:21:30.976 "num_base_bdevs": 3, 00:21:30.976 "num_base_bdevs_discovered": 2, 00:21:30.976 "num_base_bdevs_operational": 2, 00:21:30.976 "base_bdevs_list": [ 00:21:30.976 { 00:21:30.976 "name": null, 00:21:30.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.976 "is_configured": false, 00:21:30.976 "data_offset": 0, 00:21:30.976 "data_size": 63488 00:21:30.976 }, 00:21:30.976 { 00:21:30.976 "name": "BaseBdev2", 00:21:30.976 "uuid": "e5091c16-b950-421b-bb13-137f6105be61", 00:21:30.976 "is_configured": true, 00:21:30.976 "data_offset": 2048, 00:21:30.976 "data_size": 63488 00:21:30.976 }, 00:21:30.976 { 00:21:30.976 "name": "BaseBdev3", 00:21:30.976 "uuid": "d1508699-ed51-4713-93b3-e8ff2f43395f", 00:21:30.976 "is_configured": true, 00:21:30.976 "data_offset": 2048, 00:21:30.976 "data_size": 63488 00:21:30.976 } 00:21:30.976 ] 00:21:30.976 }' 00:21:30.976 09:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.976 09:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.234 09:21:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:31.234 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:31.234 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.234 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:31.234 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.234 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.234 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.493 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:31.493 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:31.493 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:31.493 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.493 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.493 [2024-10-15 09:21:15.198566] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:31.493 [2024-10-15 09:21:15.198834] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:31.493 [2024-10-15 09:21:15.285481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:31.493 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.493 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:31.493 09:21:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:31.493 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.493 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.493 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:31.493 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.493 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.493 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:31.493 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:31.493 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:31.493 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.493 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.493 [2024-10-15 09:21:15.345622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:31.493 [2024-10-15 09:21:15.345699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.753 BaseBdev2 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # 
[[ -z '' ]] 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.753 [ 00:21:31.753 { 00:21:31.753 "name": "BaseBdev2", 00:21:31.753 "aliases": [ 00:21:31.753 "420bbe4a-bf66-4493-ba34-bb92c6d6cfcf" 00:21:31.753 ], 00:21:31.753 "product_name": "Malloc disk", 00:21:31.753 "block_size": 512, 00:21:31.753 "num_blocks": 65536, 00:21:31.753 "uuid": "420bbe4a-bf66-4493-ba34-bb92c6d6cfcf", 00:21:31.753 "assigned_rate_limits": { 00:21:31.753 "rw_ios_per_sec": 0, 00:21:31.753 "rw_mbytes_per_sec": 0, 00:21:31.753 "r_mbytes_per_sec": 0, 00:21:31.753 "w_mbytes_per_sec": 0 00:21:31.753 }, 00:21:31.753 "claimed": false, 00:21:31.753 "zoned": false, 00:21:31.753 "supported_io_types": { 00:21:31.753 "read": true, 00:21:31.753 "write": true, 00:21:31.753 "unmap": true, 00:21:31.753 "flush": true, 00:21:31.753 "reset": true, 00:21:31.753 "nvme_admin": false, 00:21:31.753 "nvme_io": false, 00:21:31.753 "nvme_io_md": false, 00:21:31.753 "write_zeroes": true, 00:21:31.753 "zcopy": true, 00:21:31.753 "get_zone_info": false, 00:21:31.753 "zone_management": false, 00:21:31.753 "zone_append": false, 
00:21:31.753 "compare": false, 00:21:31.753 "compare_and_write": false, 00:21:31.753 "abort": true, 00:21:31.753 "seek_hole": false, 00:21:31.753 "seek_data": false, 00:21:31.753 "copy": true, 00:21:31.753 "nvme_iov_md": false 00:21:31.753 }, 00:21:31.753 "memory_domains": [ 00:21:31.753 { 00:21:31.753 "dma_device_id": "system", 00:21:31.753 "dma_device_type": 1 00:21:31.753 }, 00:21:31.753 { 00:21:31.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.753 "dma_device_type": 2 00:21:31.753 } 00:21:31.753 ], 00:21:31.753 "driver_specific": {} 00:21:31.753 } 00:21:31.753 ] 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.753 BaseBdev3 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:31.753 
09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:21:31.753 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.754 [ 00:21:31.754 { 00:21:31.754 "name": "BaseBdev3", 00:21:31.754 "aliases": [ 00:21:31.754 "3e140ec2-7562-46ed-9b12-36b522aba858" 00:21:31.754 ], 00:21:31.754 "product_name": "Malloc disk", 00:21:31.754 "block_size": 512, 00:21:31.754 "num_blocks": 65536, 00:21:31.754 "uuid": "3e140ec2-7562-46ed-9b12-36b522aba858", 00:21:31.754 "assigned_rate_limits": { 00:21:31.754 "rw_ios_per_sec": 0, 00:21:31.754 "rw_mbytes_per_sec": 0, 00:21:31.754 "r_mbytes_per_sec": 0, 00:21:31.754 "w_mbytes_per_sec": 0 00:21:31.754 }, 00:21:31.754 "claimed": false, 00:21:31.754 "zoned": false, 00:21:31.754 "supported_io_types": { 00:21:31.754 "read": true, 00:21:31.754 "write": true, 00:21:31.754 "unmap": true, 00:21:31.754 "flush": true, 00:21:31.754 "reset": true, 00:21:31.754 "nvme_admin": false, 00:21:31.754 "nvme_io": false, 00:21:31.754 "nvme_io_md": false, 00:21:31.754 "write_zeroes": true, 00:21:31.754 "zcopy": true, 00:21:31.754 "get_zone_info": 
false, 00:21:31.754 "zone_management": false, 00:21:31.754 "zone_append": false, 00:21:31.754 "compare": false, 00:21:31.754 "compare_and_write": false, 00:21:31.754 "abort": true, 00:21:31.754 "seek_hole": false, 00:21:31.754 "seek_data": false, 00:21:31.754 "copy": true, 00:21:31.754 "nvme_iov_md": false 00:21:31.754 }, 00:21:31.754 "memory_domains": [ 00:21:31.754 { 00:21:31.754 "dma_device_id": "system", 00:21:31.754 "dma_device_type": 1 00:21:31.754 }, 00:21:31.754 { 00:21:31.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.754 "dma_device_type": 2 00:21:31.754 } 00:21:31.754 ], 00:21:31.754 "driver_specific": {} 00:21:31.754 } 00:21:31.754 ] 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.754 [2024-10-15 09:21:15.661821] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:31.754 [2024-10-15 09:21:15.662027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:31.754 [2024-10-15 09:21:15.662189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:31.754 [2024-10-15 09:21:15.664921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.754 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.013 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.013 09:21:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.013 "name": "Existed_Raid", 00:21:32.013 "uuid": "fc8402ea-7cf4-42a5-95a5-a7b6d71548c3", 00:21:32.013 "strip_size_kb": 64, 00:21:32.013 "state": "configuring", 00:21:32.013 "raid_level": "raid5f", 00:21:32.013 "superblock": true, 00:21:32.013 "num_base_bdevs": 3, 00:21:32.013 "num_base_bdevs_discovered": 2, 00:21:32.013 "num_base_bdevs_operational": 3, 00:21:32.013 "base_bdevs_list": [ 00:21:32.013 { 00:21:32.013 "name": "BaseBdev1", 00:21:32.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.013 "is_configured": false, 00:21:32.013 "data_offset": 0, 00:21:32.013 "data_size": 0 00:21:32.013 }, 00:21:32.013 { 00:21:32.013 "name": "BaseBdev2", 00:21:32.013 "uuid": "420bbe4a-bf66-4493-ba34-bb92c6d6cfcf", 00:21:32.013 "is_configured": true, 00:21:32.013 "data_offset": 2048, 00:21:32.013 "data_size": 63488 00:21:32.013 }, 00:21:32.013 { 00:21:32.013 "name": "BaseBdev3", 00:21:32.013 "uuid": "3e140ec2-7562-46ed-9b12-36b522aba858", 00:21:32.013 "is_configured": true, 00:21:32.013 "data_offset": 2048, 00:21:32.013 "data_size": 63488 00:21:32.013 } 00:21:32.013 ] 00:21:32.013 }' 00:21:32.013 09:21:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.013 09:21:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.272 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:32.272 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.272 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.272 [2024-10-15 09:21:16.185906] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:32.272 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.272 
09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:32.272 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:32.272 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:32.272 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:32.272 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:32.272 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:32.272 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.272 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.272 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.272 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.272 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.272 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.272 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.272 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.531 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.531 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.531 "name": "Existed_Raid", 00:21:32.531 "uuid": 
"fc8402ea-7cf4-42a5-95a5-a7b6d71548c3", 00:21:32.531 "strip_size_kb": 64, 00:21:32.531 "state": "configuring", 00:21:32.531 "raid_level": "raid5f", 00:21:32.531 "superblock": true, 00:21:32.531 "num_base_bdevs": 3, 00:21:32.531 "num_base_bdevs_discovered": 1, 00:21:32.531 "num_base_bdevs_operational": 3, 00:21:32.531 "base_bdevs_list": [ 00:21:32.531 { 00:21:32.531 "name": "BaseBdev1", 00:21:32.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.531 "is_configured": false, 00:21:32.531 "data_offset": 0, 00:21:32.531 "data_size": 0 00:21:32.531 }, 00:21:32.531 { 00:21:32.531 "name": null, 00:21:32.531 "uuid": "420bbe4a-bf66-4493-ba34-bb92c6d6cfcf", 00:21:32.531 "is_configured": false, 00:21:32.531 "data_offset": 0, 00:21:32.531 "data_size": 63488 00:21:32.531 }, 00:21:32.531 { 00:21:32.531 "name": "BaseBdev3", 00:21:32.531 "uuid": "3e140ec2-7562-46ed-9b12-36b522aba858", 00:21:32.531 "is_configured": true, 00:21:32.531 "data_offset": 2048, 00:21:32.531 "data_size": 63488 00:21:32.531 } 00:21:32.531 ] 00:21:32.531 }' 00:21:32.531 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.531 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.791 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.791 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.791 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.791 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:33.050 09:21:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.050 [2024-10-15 09:21:16.832958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:33.050 BaseBdev1 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.050 [ 00:21:33.050 { 00:21:33.050 "name": "BaseBdev1", 00:21:33.050 "aliases": [ 00:21:33.050 "c1557ee1-e1b8-4cd3-aeb9-733a24cdeff1" 00:21:33.050 ], 00:21:33.050 "product_name": "Malloc disk", 00:21:33.050 "block_size": 512, 00:21:33.050 "num_blocks": 65536, 00:21:33.050 "uuid": "c1557ee1-e1b8-4cd3-aeb9-733a24cdeff1", 00:21:33.050 "assigned_rate_limits": { 00:21:33.050 "rw_ios_per_sec": 0, 00:21:33.050 "rw_mbytes_per_sec": 0, 00:21:33.050 "r_mbytes_per_sec": 0, 00:21:33.050 "w_mbytes_per_sec": 0 00:21:33.050 }, 00:21:33.050 "claimed": true, 00:21:33.050 "claim_type": "exclusive_write", 00:21:33.050 "zoned": false, 00:21:33.050 "supported_io_types": { 00:21:33.050 "read": true, 00:21:33.050 "write": true, 00:21:33.050 "unmap": true, 00:21:33.050 "flush": true, 00:21:33.050 "reset": true, 00:21:33.050 "nvme_admin": false, 00:21:33.050 "nvme_io": false, 00:21:33.050 "nvme_io_md": false, 00:21:33.050 "write_zeroes": true, 00:21:33.050 "zcopy": true, 00:21:33.050 "get_zone_info": false, 00:21:33.050 "zone_management": false, 00:21:33.050 "zone_append": false, 00:21:33.050 "compare": false, 00:21:33.050 "compare_and_write": false, 00:21:33.050 "abort": true, 00:21:33.050 "seek_hole": false, 00:21:33.050 "seek_data": false, 00:21:33.050 "copy": true, 00:21:33.050 "nvme_iov_md": false 00:21:33.050 }, 00:21:33.050 "memory_domains": [ 00:21:33.050 { 00:21:33.050 "dma_device_id": "system", 00:21:33.050 "dma_device_type": 1 00:21:33.050 }, 00:21:33.050 { 00:21:33.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.050 "dma_device_type": 2 00:21:33.050 } 00:21:33.050 ], 00:21:33.050 "driver_specific": {} 00:21:33.050 } 00:21:33.050 ] 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # 
return 0 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.050 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.051 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.051 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.051 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.051 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:33.051 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.051 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:33.051 "name": "Existed_Raid", 00:21:33.051 "uuid": 
"fc8402ea-7cf4-42a5-95a5-a7b6d71548c3", 00:21:33.051 "strip_size_kb": 64, 00:21:33.051 "state": "configuring", 00:21:33.051 "raid_level": "raid5f", 00:21:33.051 "superblock": true, 00:21:33.051 "num_base_bdevs": 3, 00:21:33.051 "num_base_bdevs_discovered": 2, 00:21:33.051 "num_base_bdevs_operational": 3, 00:21:33.051 "base_bdevs_list": [ 00:21:33.051 { 00:21:33.051 "name": "BaseBdev1", 00:21:33.051 "uuid": "c1557ee1-e1b8-4cd3-aeb9-733a24cdeff1", 00:21:33.051 "is_configured": true, 00:21:33.051 "data_offset": 2048, 00:21:33.051 "data_size": 63488 00:21:33.051 }, 00:21:33.051 { 00:21:33.051 "name": null, 00:21:33.051 "uuid": "420bbe4a-bf66-4493-ba34-bb92c6d6cfcf", 00:21:33.051 "is_configured": false, 00:21:33.051 "data_offset": 0, 00:21:33.051 "data_size": 63488 00:21:33.051 }, 00:21:33.051 { 00:21:33.051 "name": "BaseBdev3", 00:21:33.051 "uuid": "3e140ec2-7562-46ed-9b12-36b522aba858", 00:21:33.051 "is_configured": true, 00:21:33.051 "data_offset": 2048, 00:21:33.051 "data_size": 63488 00:21:33.051 } 00:21:33.051 ] 00:21:33.051 }' 00:21:33.051 09:21:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.051 09:21:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.618 09:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.618 09:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:33.618 09:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.618 09:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.618 09:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.618 09:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:33.618 09:21:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:33.618 09:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.618 09:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.618 [2024-10-15 09:21:17.445208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:33.618 09:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.618 09:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:33.618 09:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:33.619 09:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:33.619 09:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:33.619 09:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:33.619 09:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:33.619 09:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.619 09:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.619 09:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.619 09:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.619 09:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.619 09:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:21:33.619 09:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.619 09:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.619 09:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.619 09:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:33.619 "name": "Existed_Raid", 00:21:33.619 "uuid": "fc8402ea-7cf4-42a5-95a5-a7b6d71548c3", 00:21:33.619 "strip_size_kb": 64, 00:21:33.619 "state": "configuring", 00:21:33.619 "raid_level": "raid5f", 00:21:33.619 "superblock": true, 00:21:33.619 "num_base_bdevs": 3, 00:21:33.619 "num_base_bdevs_discovered": 1, 00:21:33.619 "num_base_bdevs_operational": 3, 00:21:33.619 "base_bdevs_list": [ 00:21:33.619 { 00:21:33.619 "name": "BaseBdev1", 00:21:33.619 "uuid": "c1557ee1-e1b8-4cd3-aeb9-733a24cdeff1", 00:21:33.619 "is_configured": true, 00:21:33.619 "data_offset": 2048, 00:21:33.619 "data_size": 63488 00:21:33.619 }, 00:21:33.619 { 00:21:33.619 "name": null, 00:21:33.619 "uuid": "420bbe4a-bf66-4493-ba34-bb92c6d6cfcf", 00:21:33.619 "is_configured": false, 00:21:33.619 "data_offset": 0, 00:21:33.619 "data_size": 63488 00:21:33.619 }, 00:21:33.619 { 00:21:33.619 "name": null, 00:21:33.619 "uuid": "3e140ec2-7562-46ed-9b12-36b522aba858", 00:21:33.619 "is_configured": false, 00:21:33.619 "data_offset": 0, 00:21:33.619 "data_size": 63488 00:21:33.619 } 00:21:33.619 ] 00:21:33.619 }' 00:21:33.619 09:21:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.619 09:21:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.213 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.213 09:21:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.213 09:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.213 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:34.213 09:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.213 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:34.213 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:34.213 09:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.213 09:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.213 [2024-10-15 09:21:18.061506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:34.213 09:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.213 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:34.213 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:34.213 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:34.214 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:34.214 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:34.214 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:34.214 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:21:34.214 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.214 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.214 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.214 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.214 09:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.214 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.214 09:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.214 09:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.214 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.214 "name": "Existed_Raid", 00:21:34.214 "uuid": "fc8402ea-7cf4-42a5-95a5-a7b6d71548c3", 00:21:34.214 "strip_size_kb": 64, 00:21:34.214 "state": "configuring", 00:21:34.214 "raid_level": "raid5f", 00:21:34.214 "superblock": true, 00:21:34.214 "num_base_bdevs": 3, 00:21:34.214 "num_base_bdevs_discovered": 2, 00:21:34.214 "num_base_bdevs_operational": 3, 00:21:34.214 "base_bdevs_list": [ 00:21:34.214 { 00:21:34.214 "name": "BaseBdev1", 00:21:34.214 "uuid": "c1557ee1-e1b8-4cd3-aeb9-733a24cdeff1", 00:21:34.214 "is_configured": true, 00:21:34.214 "data_offset": 2048, 00:21:34.214 "data_size": 63488 00:21:34.214 }, 00:21:34.214 { 00:21:34.214 "name": null, 00:21:34.214 "uuid": "420bbe4a-bf66-4493-ba34-bb92c6d6cfcf", 00:21:34.214 "is_configured": false, 00:21:34.214 "data_offset": 0, 00:21:34.214 "data_size": 63488 00:21:34.214 }, 00:21:34.214 { 00:21:34.214 "name": "BaseBdev3", 00:21:34.214 "uuid": "3e140ec2-7562-46ed-9b12-36b522aba858", 
00:21:34.214 "is_configured": true, 00:21:34.214 "data_offset": 2048, 00:21:34.214 "data_size": 63488 00:21:34.214 } 00:21:34.214 ] 00:21:34.214 }' 00:21:34.214 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.214 09:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.784 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.784 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:34.784 09:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.784 09:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.784 09:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.784 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:34.784 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:34.784 09:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.784 09:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.784 [2024-10-15 09:21:18.617656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:35.041 09:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.041 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:35.041 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:35.041 09:21:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:35.041 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:35.041 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:35.041 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:35.041 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.041 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.041 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.041 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:35.041 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.041 09:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.041 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:35.041 09:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.041 09:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.041 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.041 "name": "Existed_Raid", 00:21:35.041 "uuid": "fc8402ea-7cf4-42a5-95a5-a7b6d71548c3", 00:21:35.041 "strip_size_kb": 64, 00:21:35.041 "state": "configuring", 00:21:35.041 "raid_level": "raid5f", 00:21:35.041 "superblock": true, 00:21:35.041 "num_base_bdevs": 3, 00:21:35.041 "num_base_bdevs_discovered": 1, 00:21:35.041 "num_base_bdevs_operational": 3, 00:21:35.041 "base_bdevs_list": [ 00:21:35.041 { 00:21:35.041 
"name": null, 00:21:35.041 "uuid": "c1557ee1-e1b8-4cd3-aeb9-733a24cdeff1", 00:21:35.041 "is_configured": false, 00:21:35.041 "data_offset": 0, 00:21:35.041 "data_size": 63488 00:21:35.041 }, 00:21:35.041 { 00:21:35.041 "name": null, 00:21:35.041 "uuid": "420bbe4a-bf66-4493-ba34-bb92c6d6cfcf", 00:21:35.041 "is_configured": false, 00:21:35.041 "data_offset": 0, 00:21:35.041 "data_size": 63488 00:21:35.041 }, 00:21:35.041 { 00:21:35.041 "name": "BaseBdev3", 00:21:35.041 "uuid": "3e140ec2-7562-46ed-9b12-36b522aba858", 00:21:35.041 "is_configured": true, 00:21:35.041 "data_offset": 2048, 00:21:35.041 "data_size": 63488 00:21:35.041 } 00:21:35.041 ] 00:21:35.041 }' 00:21:35.041 09:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:35.041 09:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.606 [2024-10-15 
09:21:19.328914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.606 "name": "Existed_Raid", 00:21:35.606 "uuid": "fc8402ea-7cf4-42a5-95a5-a7b6d71548c3", 00:21:35.606 "strip_size_kb": 64, 00:21:35.606 "state": "configuring", 00:21:35.606 "raid_level": "raid5f", 00:21:35.606 "superblock": true, 00:21:35.606 "num_base_bdevs": 3, 00:21:35.606 "num_base_bdevs_discovered": 2, 00:21:35.606 "num_base_bdevs_operational": 3, 00:21:35.606 "base_bdevs_list": [ 00:21:35.606 { 00:21:35.606 "name": null, 00:21:35.606 "uuid": "c1557ee1-e1b8-4cd3-aeb9-733a24cdeff1", 00:21:35.606 "is_configured": false, 00:21:35.606 "data_offset": 0, 00:21:35.606 "data_size": 63488 00:21:35.606 }, 00:21:35.606 { 00:21:35.606 "name": "BaseBdev2", 00:21:35.606 "uuid": "420bbe4a-bf66-4493-ba34-bb92c6d6cfcf", 00:21:35.606 "is_configured": true, 00:21:35.606 "data_offset": 2048, 00:21:35.606 "data_size": 63488 00:21:35.606 }, 00:21:35.606 { 00:21:35.606 "name": "BaseBdev3", 00:21:35.606 "uuid": "3e140ec2-7562-46ed-9b12-36b522aba858", 00:21:35.606 "is_configured": true, 00:21:35.606 "data_offset": 2048, 00:21:35.606 "data_size": 63488 00:21:35.606 } 00:21:35.606 ] 00:21:35.606 }' 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:35.606 09:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.172 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.172 09:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.172 09:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.172 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:36.172 09:21:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.172 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:36.172 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.172 09:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:36.172 09:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.172 09:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.172 09:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.172 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c1557ee1-e1b8-4cd3-aeb9-733a24cdeff1 00:21:36.172 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.172 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.172 [2024-10-15 09:21:20.062715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:36.172 [2024-10-15 09:21:20.063040] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:36.172 [2024-10-15 09:21:20.063067] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:36.172 NewBaseBdev 00:21:36.172 [2024-10-15 09:21:20.063429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:36.172 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.172 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:36.172 09:21:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:21:36.172 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:36.172 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:36.172 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:36.172 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:36.172 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:21:36.172 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.172 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.172 [2024-10-15 09:21:20.068457] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:36.172 [2024-10-15 09:21:20.068483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:36.172 [2024-10-15 09:21:20.068848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:36.172 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.172 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:36.172 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.172 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.172 [ 00:21:36.172 { 00:21:36.172 "name": "NewBaseBdev", 00:21:36.172 "aliases": [ 00:21:36.172 "c1557ee1-e1b8-4cd3-aeb9-733a24cdeff1" 00:21:36.172 ], 00:21:36.172 "product_name": "Malloc 
disk", 00:21:36.172 "block_size": 512, 00:21:36.172 "num_blocks": 65536, 00:21:36.172 "uuid": "c1557ee1-e1b8-4cd3-aeb9-733a24cdeff1", 00:21:36.172 "assigned_rate_limits": { 00:21:36.172 "rw_ios_per_sec": 0, 00:21:36.172 "rw_mbytes_per_sec": 0, 00:21:36.172 "r_mbytes_per_sec": 0, 00:21:36.172 "w_mbytes_per_sec": 0 00:21:36.172 }, 00:21:36.172 "claimed": true, 00:21:36.172 "claim_type": "exclusive_write", 00:21:36.172 "zoned": false, 00:21:36.172 "supported_io_types": { 00:21:36.172 "read": true, 00:21:36.172 "write": true, 00:21:36.172 "unmap": true, 00:21:36.172 "flush": true, 00:21:36.172 "reset": true, 00:21:36.172 "nvme_admin": false, 00:21:36.172 "nvme_io": false, 00:21:36.172 "nvme_io_md": false, 00:21:36.172 "write_zeroes": true, 00:21:36.172 "zcopy": true, 00:21:36.172 "get_zone_info": false, 00:21:36.172 "zone_management": false, 00:21:36.172 "zone_append": false, 00:21:36.172 "compare": false, 00:21:36.172 "compare_and_write": false, 00:21:36.172 "abort": true, 00:21:36.444 "seek_hole": false, 00:21:36.444 "seek_data": false, 00:21:36.444 "copy": true, 00:21:36.444 "nvme_iov_md": false 00:21:36.444 }, 00:21:36.444 "memory_domains": [ 00:21:36.444 { 00:21:36.444 "dma_device_id": "system", 00:21:36.444 "dma_device_type": 1 00:21:36.444 }, 00:21:36.444 { 00:21:36.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.444 "dma_device_type": 2 00:21:36.444 } 00:21:36.444 ], 00:21:36.444 "driver_specific": {} 00:21:36.444 } 00:21:36.444 ] 00:21:36.444 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.444 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:36.444 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:36.444 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:36.444 09:21:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:36.444 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:36.444 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:36.444 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:36.444 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.444 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.444 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.444 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:36.444 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:36.444 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.444 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.444 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.444 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.444 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.444 "name": "Existed_Raid", 00:21:36.444 "uuid": "fc8402ea-7cf4-42a5-95a5-a7b6d71548c3", 00:21:36.444 "strip_size_kb": 64, 00:21:36.444 "state": "online", 00:21:36.444 "raid_level": "raid5f", 00:21:36.444 "superblock": true, 00:21:36.444 "num_base_bdevs": 3, 00:21:36.444 "num_base_bdevs_discovered": 3, 00:21:36.444 "num_base_bdevs_operational": 3, 00:21:36.444 
"base_bdevs_list": [ 00:21:36.444 { 00:21:36.444 "name": "NewBaseBdev", 00:21:36.444 "uuid": "c1557ee1-e1b8-4cd3-aeb9-733a24cdeff1", 00:21:36.444 "is_configured": true, 00:21:36.444 "data_offset": 2048, 00:21:36.444 "data_size": 63488 00:21:36.444 }, 00:21:36.444 { 00:21:36.444 "name": "BaseBdev2", 00:21:36.444 "uuid": "420bbe4a-bf66-4493-ba34-bb92c6d6cfcf", 00:21:36.444 "is_configured": true, 00:21:36.444 "data_offset": 2048, 00:21:36.444 "data_size": 63488 00:21:36.444 }, 00:21:36.444 { 00:21:36.444 "name": "BaseBdev3", 00:21:36.444 "uuid": "3e140ec2-7562-46ed-9b12-36b522aba858", 00:21:36.444 "is_configured": true, 00:21:36.444 "data_offset": 2048, 00:21:36.444 "data_size": 63488 00:21:36.444 } 00:21:36.444 ] 00:21:36.444 }' 00:21:36.444 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:36.444 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.727 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:36.727 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:36.727 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:36.727 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:36.727 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:36.727 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:36.727 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:36.727 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.727 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:21:36.727 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.727 [2024-10-15 09:21:20.647505] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:36.985 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.985 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:36.985 "name": "Existed_Raid", 00:21:36.985 "aliases": [ 00:21:36.985 "fc8402ea-7cf4-42a5-95a5-a7b6d71548c3" 00:21:36.985 ], 00:21:36.985 "product_name": "Raid Volume", 00:21:36.985 "block_size": 512, 00:21:36.985 "num_blocks": 126976, 00:21:36.985 "uuid": "fc8402ea-7cf4-42a5-95a5-a7b6d71548c3", 00:21:36.985 "assigned_rate_limits": { 00:21:36.985 "rw_ios_per_sec": 0, 00:21:36.985 "rw_mbytes_per_sec": 0, 00:21:36.985 "r_mbytes_per_sec": 0, 00:21:36.985 "w_mbytes_per_sec": 0 00:21:36.985 }, 00:21:36.985 "claimed": false, 00:21:36.985 "zoned": false, 00:21:36.985 "supported_io_types": { 00:21:36.985 "read": true, 00:21:36.985 "write": true, 00:21:36.985 "unmap": false, 00:21:36.985 "flush": false, 00:21:36.985 "reset": true, 00:21:36.985 "nvme_admin": false, 00:21:36.985 "nvme_io": false, 00:21:36.985 "nvme_io_md": false, 00:21:36.985 "write_zeroes": true, 00:21:36.985 "zcopy": false, 00:21:36.985 "get_zone_info": false, 00:21:36.985 "zone_management": false, 00:21:36.985 "zone_append": false, 00:21:36.985 "compare": false, 00:21:36.985 "compare_and_write": false, 00:21:36.985 "abort": false, 00:21:36.985 "seek_hole": false, 00:21:36.985 "seek_data": false, 00:21:36.985 "copy": false, 00:21:36.985 "nvme_iov_md": false 00:21:36.985 }, 00:21:36.985 "driver_specific": { 00:21:36.985 "raid": { 00:21:36.985 "uuid": "fc8402ea-7cf4-42a5-95a5-a7b6d71548c3", 00:21:36.985 "strip_size_kb": 64, 00:21:36.985 "state": "online", 00:21:36.985 "raid_level": "raid5f", 00:21:36.985 "superblock": true, 00:21:36.985 
"num_base_bdevs": 3, 00:21:36.985 "num_base_bdevs_discovered": 3, 00:21:36.985 "num_base_bdevs_operational": 3, 00:21:36.985 "base_bdevs_list": [ 00:21:36.985 { 00:21:36.985 "name": "NewBaseBdev", 00:21:36.985 "uuid": "c1557ee1-e1b8-4cd3-aeb9-733a24cdeff1", 00:21:36.985 "is_configured": true, 00:21:36.985 "data_offset": 2048, 00:21:36.985 "data_size": 63488 00:21:36.985 }, 00:21:36.985 { 00:21:36.985 "name": "BaseBdev2", 00:21:36.985 "uuid": "420bbe4a-bf66-4493-ba34-bb92c6d6cfcf", 00:21:36.985 "is_configured": true, 00:21:36.985 "data_offset": 2048, 00:21:36.985 "data_size": 63488 00:21:36.985 }, 00:21:36.985 { 00:21:36.985 "name": "BaseBdev3", 00:21:36.985 "uuid": "3e140ec2-7562-46ed-9b12-36b522aba858", 00:21:36.985 "is_configured": true, 00:21:36.985 "data_offset": 2048, 00:21:36.985 "data_size": 63488 00:21:36.985 } 00:21:36.985 ] 00:21:36.985 } 00:21:36.985 } 00:21:36.985 }' 00:21:36.985 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:36.985 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:36.985 BaseBdev2 00:21:36.985 BaseBdev3' 00:21:36.985 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:36.985 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:36.985 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:36.986 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:36.986 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:36.986 09:21:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.986 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.986 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.986 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:36.986 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:36.986 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:36.986 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:36.986 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:36.986 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.986 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.986 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.986 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:36.986 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:36.986 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:36.986 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:36.986 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:36.986 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:36.986 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.244 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.244 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:37.244 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:37.244 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:37.244 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.244 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.244 [2024-10-15 09:21:20.951397] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:37.244 [2024-10-15 09:21:20.951450] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:37.244 [2024-10-15 09:21:20.951589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:37.244 [2024-10-15 09:21:20.952003] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:37.244 [2024-10-15 09:21:20.952028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:37.244 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.244 09:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81164 00:21:37.244 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 81164 ']' 00:21:37.244 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 81164 00:21:37.244 09:21:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:21:37.244 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:37.244 09:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81164 00:21:37.244 09:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:37.244 09:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:37.244 09:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81164' 00:21:37.244 killing process with pid 81164 00:21:37.244 09:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 81164 00:21:37.244 [2024-10-15 09:21:21.004154] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:37.244 09:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 81164 00:21:37.502 [2024-10-15 09:21:21.301511] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:38.879 09:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:21:38.879 00:21:38.879 real 0m12.314s 00:21:38.879 user 0m20.141s 00:21:38.879 sys 0m1.864s 00:21:38.879 09:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:38.879 09:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.879 ************************************ 00:21:38.879 END TEST raid5f_state_function_test_sb 00:21:38.879 ************************************ 00:21:38.879 09:21:22 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:21:38.879 09:21:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:38.879 
09:21:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:38.879 09:21:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:38.879 ************************************ 00:21:38.879 START TEST raid5f_superblock_test 00:21:38.879 ************************************ 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81796 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81796 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81796 ']' 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:38.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:38.879 09:21:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.879 [2024-10-15 09:21:22.677326] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:21:38.879 [2024-10-15 09:21:22.677993] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81796 ] 00:21:39.138 [2024-10-15 09:21:22.861173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.138 [2024-10-15 09:21:23.030342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.398 [2024-10-15 09:21:23.277127] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:39.398 [2024-10-15 09:21:23.277229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.966 malloc1 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.966 [2024-10-15 09:21:23.763250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:39.966 [2024-10-15 09:21:23.763361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.966 [2024-10-15 09:21:23.763404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:39.966 [2024-10-15 09:21:23.763421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.966 [2024-10-15 09:21:23.766685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.966 [2024-10-15 09:21:23.766736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:39.966 pt1 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:21:39.966 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.967 malloc2 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.967 [2024-10-15 09:21:23.822527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:39.967 [2024-10-15 09:21:23.822602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.967 [2024-10-15 09:21:23.822636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:39.967 [2024-10-15 09:21:23.822651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.967 [2024-10-15 09:21:23.825594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.967 [2024-10-15 09:21:23.825767] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:39.967 pt2 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.967 malloc3 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.967 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.967 [2024-10-15 09:21:23.893858] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:40.225 [2024-10-15 09:21:23.894066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.225 [2024-10-15 09:21:23.894188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:40.225 [2024-10-15 09:21:23.894419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.225 [2024-10-15 09:21:23.897330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.225 [2024-10-15 09:21:23.897496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:40.225 pt3 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.225 [2024-10-15 09:21:23.905952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:40.225 [2024-10-15 09:21:23.908557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:40.225 [2024-10-15 09:21:23.908669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:40.225 [2024-10-15 09:21:23.908924] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:40.225 [2024-10-15 09:21:23.908953] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:21:40.225 [2024-10-15 09:21:23.909300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:40.225 [2024-10-15 09:21:23.914614] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:40.225 [2024-10-15 09:21:23.914762] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:40.225 [2024-10-15 09:21:23.915059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.225 
09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.225 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.225 "name": "raid_bdev1", 00:21:40.225 "uuid": "cf237ffe-103b-48bf-8beb-fb00fb5ee6ad", 00:21:40.225 "strip_size_kb": 64, 00:21:40.225 "state": "online", 00:21:40.226 "raid_level": "raid5f", 00:21:40.226 "superblock": true, 00:21:40.226 "num_base_bdevs": 3, 00:21:40.226 "num_base_bdevs_discovered": 3, 00:21:40.226 "num_base_bdevs_operational": 3, 00:21:40.226 "base_bdevs_list": [ 00:21:40.226 { 00:21:40.226 "name": "pt1", 00:21:40.226 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:40.226 "is_configured": true, 00:21:40.226 "data_offset": 2048, 00:21:40.226 "data_size": 63488 00:21:40.226 }, 00:21:40.226 { 00:21:40.226 "name": "pt2", 00:21:40.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:40.226 "is_configured": true, 00:21:40.226 "data_offset": 2048, 00:21:40.226 "data_size": 63488 00:21:40.226 }, 00:21:40.226 { 00:21:40.226 "name": "pt3", 00:21:40.226 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:40.226 "is_configured": true, 00:21:40.226 "data_offset": 2048, 00:21:40.226 "data_size": 63488 00:21:40.226 } 00:21:40.226 ] 00:21:40.226 }' 00:21:40.226 09:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.226 09:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.792 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:40.792 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:40.792 09:21:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:40.792 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:40.792 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:40.792 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:40.792 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:40.792 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:40.792 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.792 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.792 [2024-10-15 09:21:24.433659] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:40.792 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.792 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:40.792 "name": "raid_bdev1", 00:21:40.792 "aliases": [ 00:21:40.792 "cf237ffe-103b-48bf-8beb-fb00fb5ee6ad" 00:21:40.792 ], 00:21:40.792 "product_name": "Raid Volume", 00:21:40.792 "block_size": 512, 00:21:40.792 "num_blocks": 126976, 00:21:40.792 "uuid": "cf237ffe-103b-48bf-8beb-fb00fb5ee6ad", 00:21:40.792 "assigned_rate_limits": { 00:21:40.792 "rw_ios_per_sec": 0, 00:21:40.792 "rw_mbytes_per_sec": 0, 00:21:40.792 "r_mbytes_per_sec": 0, 00:21:40.792 "w_mbytes_per_sec": 0 00:21:40.792 }, 00:21:40.792 "claimed": false, 00:21:40.792 "zoned": false, 00:21:40.792 "supported_io_types": { 00:21:40.792 "read": true, 00:21:40.792 "write": true, 00:21:40.792 "unmap": false, 00:21:40.792 "flush": false, 00:21:40.792 "reset": true, 00:21:40.792 "nvme_admin": false, 00:21:40.792 "nvme_io": false, 00:21:40.792 "nvme_io_md": false, 
00:21:40.792 "write_zeroes": true, 00:21:40.792 "zcopy": false, 00:21:40.792 "get_zone_info": false, 00:21:40.792 "zone_management": false, 00:21:40.792 "zone_append": false, 00:21:40.792 "compare": false, 00:21:40.792 "compare_and_write": false, 00:21:40.792 "abort": false, 00:21:40.792 "seek_hole": false, 00:21:40.792 "seek_data": false, 00:21:40.792 "copy": false, 00:21:40.792 "nvme_iov_md": false 00:21:40.792 }, 00:21:40.792 "driver_specific": { 00:21:40.792 "raid": { 00:21:40.792 "uuid": "cf237ffe-103b-48bf-8beb-fb00fb5ee6ad", 00:21:40.792 "strip_size_kb": 64, 00:21:40.792 "state": "online", 00:21:40.792 "raid_level": "raid5f", 00:21:40.792 "superblock": true, 00:21:40.792 "num_base_bdevs": 3, 00:21:40.792 "num_base_bdevs_discovered": 3, 00:21:40.792 "num_base_bdevs_operational": 3, 00:21:40.792 "base_bdevs_list": [ 00:21:40.792 { 00:21:40.792 "name": "pt1", 00:21:40.792 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:40.792 "is_configured": true, 00:21:40.792 "data_offset": 2048, 00:21:40.792 "data_size": 63488 00:21:40.792 }, 00:21:40.792 { 00:21:40.792 "name": "pt2", 00:21:40.792 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:40.792 "is_configured": true, 00:21:40.792 "data_offset": 2048, 00:21:40.792 "data_size": 63488 00:21:40.792 }, 00:21:40.793 { 00:21:40.793 "name": "pt3", 00:21:40.793 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:40.793 "is_configured": true, 00:21:40.793 "data_offset": 2048, 00:21:40.793 "data_size": 63488 00:21:40.793 } 00:21:40.793 ] 00:21:40.793 } 00:21:40.793 } 00:21:40.793 }' 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:40.793 pt2 00:21:40.793 pt3' 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:40.793 
09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.793 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.051 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:41.051 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:41.052 [2024-10-15 09:21:24.725642] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cf237ffe-103b-48bf-8beb-fb00fb5ee6ad 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cf237ffe-103b-48bf-8beb-fb00fb5ee6ad ']' 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:41.052 09:21:24 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.052 [2024-10-15 09:21:24.781536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:41.052 [2024-10-15 09:21:24.781578] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:41.052 [2024-10-15 09:21:24.781709] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:41.052 [2024-10-15 09:21:24.781816] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:41.052 [2024-10-15 09:21:24.781832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.052 [2024-10-15 09:21:24.933694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:41.052 [2024-10-15 09:21:24.937453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:41.052 [2024-10-15 09:21:24.937563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:41.052 [2024-10-15 09:21:24.937684] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:41.052 [2024-10-15 09:21:24.937797] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:41.052 [2024-10-15 09:21:24.937855] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:41.052 [2024-10-15 09:21:24.937900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:41.052 [2024-10-15 09:21:24.937924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:41.052 request: 00:21:41.052 { 00:21:41.052 "name": "raid_bdev1", 00:21:41.052 "raid_level": "raid5f", 00:21:41.052 "base_bdevs": [ 00:21:41.052 "malloc1", 00:21:41.052 "malloc2", 00:21:41.052 "malloc3" 00:21:41.052 ], 00:21:41.052 "strip_size_kb": 64, 00:21:41.052 "superblock": false, 00:21:41.052 "method": "bdev_raid_create", 00:21:41.052 "req_id": 1 00:21:41.052 } 00:21:41.052 Got JSON-RPC error response 00:21:41.052 response: 00:21:41.052 { 00:21:41.052 "code": -17, 00:21:41.052 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:41.052 } 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.052 
09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.052 09:21:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.311 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:41.311 09:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.311 [2024-10-15 09:21:25.005945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:41.311 [2024-10-15 09:21:25.006242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.311 [2024-10-15 09:21:25.006439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:41.311 [2024-10-15 09:21:25.006586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.311 [2024-10-15 09:21:25.009734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.311 [2024-10-15 09:21:25.009893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:41.311 [2024-10-15 09:21:25.010139] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:41.311 [2024-10-15 09:21:25.010344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:41.311 pt1 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.311 "name": "raid_bdev1", 00:21:41.311 "uuid": "cf237ffe-103b-48bf-8beb-fb00fb5ee6ad", 00:21:41.311 "strip_size_kb": 64, 00:21:41.311 "state": "configuring", 00:21:41.311 "raid_level": "raid5f", 00:21:41.311 "superblock": true, 00:21:41.311 "num_base_bdevs": 3, 00:21:41.311 "num_base_bdevs_discovered": 1, 00:21:41.311 
"num_base_bdevs_operational": 3, 00:21:41.311 "base_bdevs_list": [ 00:21:41.311 { 00:21:41.311 "name": "pt1", 00:21:41.311 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:41.311 "is_configured": true, 00:21:41.311 "data_offset": 2048, 00:21:41.311 "data_size": 63488 00:21:41.311 }, 00:21:41.311 { 00:21:41.311 "name": null, 00:21:41.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:41.311 "is_configured": false, 00:21:41.311 "data_offset": 2048, 00:21:41.311 "data_size": 63488 00:21:41.311 }, 00:21:41.311 { 00:21:41.311 "name": null, 00:21:41.311 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:41.311 "is_configured": false, 00:21:41.311 "data_offset": 2048, 00:21:41.311 "data_size": 63488 00:21:41.311 } 00:21:41.311 ] 00:21:41.311 }' 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.311 09:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.879 [2024-10-15 09:21:25.546419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:41.879 [2024-10-15 09:21:25.546509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.879 [2024-10-15 09:21:25.546549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:21:41.879 [2024-10-15 09:21:25.546566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.879 [2024-10-15 09:21:25.547232] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.879 [2024-10-15 09:21:25.547268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:41.879 [2024-10-15 09:21:25.547401] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:41.879 [2024-10-15 09:21:25.547436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:41.879 pt2 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.879 [2024-10-15 09:21:25.554395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.879 "name": "raid_bdev1", 00:21:41.879 "uuid": "cf237ffe-103b-48bf-8beb-fb00fb5ee6ad", 00:21:41.879 "strip_size_kb": 64, 00:21:41.879 "state": "configuring", 00:21:41.879 "raid_level": "raid5f", 00:21:41.879 "superblock": true, 00:21:41.879 "num_base_bdevs": 3, 00:21:41.879 "num_base_bdevs_discovered": 1, 00:21:41.879 "num_base_bdevs_operational": 3, 00:21:41.879 "base_bdevs_list": [ 00:21:41.879 { 00:21:41.879 "name": "pt1", 00:21:41.879 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:41.879 "is_configured": true, 00:21:41.879 "data_offset": 2048, 00:21:41.879 "data_size": 63488 00:21:41.879 }, 00:21:41.879 { 00:21:41.879 "name": null, 00:21:41.879 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:41.879 "is_configured": false, 00:21:41.879 "data_offset": 0, 00:21:41.879 "data_size": 63488 00:21:41.879 }, 00:21:41.879 { 00:21:41.879 "name": null, 00:21:41.879 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:41.879 "is_configured": false, 00:21:41.879 "data_offset": 2048, 00:21:41.879 "data_size": 63488 00:21:41.879 } 00:21:41.879 ] 00:21:41.879 }' 00:21:41.879 09:21:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.879 09:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.447 [2024-10-15 09:21:26.074530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:42.447 [2024-10-15 09:21:26.074634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.447 [2024-10-15 09:21:26.074665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:42.447 [2024-10-15 09:21:26.074684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.447 [2024-10-15 09:21:26.075341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.447 [2024-10-15 09:21:26.075374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:42.447 [2024-10-15 09:21:26.075489] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:42.447 [2024-10-15 09:21:26.075528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:42.447 pt2 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:42.447 09:21:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.447 [2024-10-15 09:21:26.086582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:42.447 [2024-10-15 09:21:26.086666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.447 [2024-10-15 09:21:26.086695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:42.447 [2024-10-15 09:21:26.086714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.447 [2024-10-15 09:21:26.087338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.447 [2024-10-15 09:21:26.087385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:42.447 [2024-10-15 09:21:26.087496] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:42.447 [2024-10-15 09:21:26.087536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:42.447 [2024-10-15 09:21:26.087732] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:42.447 [2024-10-15 09:21:26.087760] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:42.447 [2024-10-15 09:21:26.088092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:42.447 [2024-10-15 09:21:26.093083] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:42.447 [2024-10-15 09:21:26.093258] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:42.447 [2024-10-15 09:21:26.093536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.447 pt3 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.447 "name": "raid_bdev1", 00:21:42.447 "uuid": "cf237ffe-103b-48bf-8beb-fb00fb5ee6ad", 00:21:42.447 "strip_size_kb": 64, 00:21:42.447 "state": "online", 00:21:42.447 "raid_level": "raid5f", 00:21:42.447 "superblock": true, 00:21:42.447 "num_base_bdevs": 3, 00:21:42.447 "num_base_bdevs_discovered": 3, 00:21:42.447 "num_base_bdevs_operational": 3, 00:21:42.447 "base_bdevs_list": [ 00:21:42.447 { 00:21:42.447 "name": "pt1", 00:21:42.447 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:42.447 "is_configured": true, 00:21:42.447 "data_offset": 2048, 00:21:42.447 "data_size": 63488 00:21:42.447 }, 00:21:42.447 { 00:21:42.447 "name": "pt2", 00:21:42.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:42.447 "is_configured": true, 00:21:42.447 "data_offset": 2048, 00:21:42.447 "data_size": 63488 00:21:42.447 }, 00:21:42.447 { 00:21:42.447 "name": "pt3", 00:21:42.447 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:42.447 "is_configured": true, 00:21:42.447 "data_offset": 2048, 00:21:42.447 "data_size": 63488 00:21:42.447 } 00:21:42.447 ] 00:21:42.447 }' 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.447 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.763 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:42.763 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:42.763 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:42.763 
09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:42.763 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:42.763 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:42.763 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:42.763 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.763 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.763 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:42.763 [2024-10-15 09:21:26.628018] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:42.763 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.022 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:43.022 "name": "raid_bdev1", 00:21:43.022 "aliases": [ 00:21:43.022 "cf237ffe-103b-48bf-8beb-fb00fb5ee6ad" 00:21:43.022 ], 00:21:43.022 "product_name": "Raid Volume", 00:21:43.022 "block_size": 512, 00:21:43.022 "num_blocks": 126976, 00:21:43.022 "uuid": "cf237ffe-103b-48bf-8beb-fb00fb5ee6ad", 00:21:43.022 "assigned_rate_limits": { 00:21:43.022 "rw_ios_per_sec": 0, 00:21:43.022 "rw_mbytes_per_sec": 0, 00:21:43.022 "r_mbytes_per_sec": 0, 00:21:43.022 "w_mbytes_per_sec": 0 00:21:43.022 }, 00:21:43.022 "claimed": false, 00:21:43.022 "zoned": false, 00:21:43.022 "supported_io_types": { 00:21:43.022 "read": true, 00:21:43.022 "write": true, 00:21:43.022 "unmap": false, 00:21:43.022 "flush": false, 00:21:43.022 "reset": true, 00:21:43.022 "nvme_admin": false, 00:21:43.022 "nvme_io": false, 00:21:43.022 "nvme_io_md": false, 00:21:43.022 "write_zeroes": true, 00:21:43.022 "zcopy": false, 00:21:43.022 "get_zone_info": false, 
00:21:43.022 "zone_management": false, 00:21:43.022 "zone_append": false, 00:21:43.022 "compare": false, 00:21:43.022 "compare_and_write": false, 00:21:43.022 "abort": false, 00:21:43.022 "seek_hole": false, 00:21:43.022 "seek_data": false, 00:21:43.022 "copy": false, 00:21:43.022 "nvme_iov_md": false 00:21:43.022 }, 00:21:43.022 "driver_specific": { 00:21:43.023 "raid": { 00:21:43.023 "uuid": "cf237ffe-103b-48bf-8beb-fb00fb5ee6ad", 00:21:43.023 "strip_size_kb": 64, 00:21:43.023 "state": "online", 00:21:43.023 "raid_level": "raid5f", 00:21:43.023 "superblock": true, 00:21:43.023 "num_base_bdevs": 3, 00:21:43.023 "num_base_bdevs_discovered": 3, 00:21:43.023 "num_base_bdevs_operational": 3, 00:21:43.023 "base_bdevs_list": [ 00:21:43.023 { 00:21:43.023 "name": "pt1", 00:21:43.023 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:43.023 "is_configured": true, 00:21:43.023 "data_offset": 2048, 00:21:43.023 "data_size": 63488 00:21:43.023 }, 00:21:43.023 { 00:21:43.023 "name": "pt2", 00:21:43.023 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.023 "is_configured": true, 00:21:43.023 "data_offset": 2048, 00:21:43.023 "data_size": 63488 00:21:43.023 }, 00:21:43.023 { 00:21:43.023 "name": "pt3", 00:21:43.023 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:43.023 "is_configured": true, 00:21:43.023 "data_offset": 2048, 00:21:43.023 "data_size": 63488 00:21:43.023 } 00:21:43.023 ] 00:21:43.023 } 00:21:43.023 } 00:21:43.023 }' 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:43.023 pt2 00:21:43.023 pt3' 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:43.023 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.281 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:43.281 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:43.281 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:43.281 09:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:43.281 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.281 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.281 [2024-10-15 09:21:26.956036] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:43.281 09:21:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cf237ffe-103b-48bf-8beb-fb00fb5ee6ad '!=' cf237ffe-103b-48bf-8beb-fb00fb5ee6ad ']' 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:43.281 09:21:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.281 [2024-10-15 09:21:27.007875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.281 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.281 "name": "raid_bdev1", 00:21:43.281 "uuid": "cf237ffe-103b-48bf-8beb-fb00fb5ee6ad", 00:21:43.281 "strip_size_kb": 64, 00:21:43.281 "state": "online", 00:21:43.281 "raid_level": "raid5f", 00:21:43.281 "superblock": true, 00:21:43.281 "num_base_bdevs": 3, 00:21:43.281 "num_base_bdevs_discovered": 2, 00:21:43.281 "num_base_bdevs_operational": 2, 00:21:43.281 "base_bdevs_list": [ 00:21:43.281 { 00:21:43.281 "name": null, 00:21:43.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.281 "is_configured": false, 00:21:43.281 "data_offset": 0, 00:21:43.281 "data_size": 63488 00:21:43.281 }, 00:21:43.281 { 00:21:43.281 "name": "pt2", 00:21:43.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.281 "is_configured": true, 00:21:43.282 "data_offset": 2048, 00:21:43.282 "data_size": 63488 00:21:43.282 }, 00:21:43.282 { 00:21:43.282 "name": "pt3", 00:21:43.282 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:43.282 "is_configured": true, 00:21:43.282 "data_offset": 2048, 00:21:43.282 "data_size": 63488 00:21:43.282 } 00:21:43.282 ] 00:21:43.282 }' 00:21:43.282 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.282 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.849 [2024-10-15 09:21:27.519953] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:21:43.849 [2024-10-15 09:21:27.519994] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:43.849 [2024-10-15 09:21:27.520111] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:43.849 [2024-10-15 09:21:27.520214] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:43.849 [2024-10-15 09:21:27.520239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.849 09:21:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.849 [2024-10-15 09:21:27.607912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:43.849 [2024-10-15 09:21:27.607990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.849 [2024-10-15 09:21:27.608018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:21:43.849 [2024-10-15 09:21:27.608036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:21:43.849 [2024-10-15 09:21:27.611065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.849 [2024-10-15 09:21:27.611263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:43.849 [2024-10-15 09:21:27.611386] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:43.849 [2024-10-15 09:21:27.611457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:43.849 pt2 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.849 09:21:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.849 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.849 "name": "raid_bdev1", 00:21:43.849 "uuid": "cf237ffe-103b-48bf-8beb-fb00fb5ee6ad", 00:21:43.849 "strip_size_kb": 64, 00:21:43.850 "state": "configuring", 00:21:43.850 "raid_level": "raid5f", 00:21:43.850 "superblock": true, 00:21:43.850 "num_base_bdevs": 3, 00:21:43.850 "num_base_bdevs_discovered": 1, 00:21:43.850 "num_base_bdevs_operational": 2, 00:21:43.850 "base_bdevs_list": [ 00:21:43.850 { 00:21:43.850 "name": null, 00:21:43.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.850 "is_configured": false, 00:21:43.850 "data_offset": 2048, 00:21:43.850 "data_size": 63488 00:21:43.850 }, 00:21:43.850 { 00:21:43.850 "name": "pt2", 00:21:43.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.850 "is_configured": true, 00:21:43.850 "data_offset": 2048, 00:21:43.850 "data_size": 63488 00:21:43.850 }, 00:21:43.850 { 00:21:43.850 "name": null, 00:21:43.850 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:43.850 "is_configured": false, 00:21:43.850 "data_offset": 2048, 00:21:43.850 "data_size": 63488 00:21:43.850 } 00:21:43.850 ] 00:21:43.850 }' 00:21:43.850 09:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.850 09:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.417 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:21:44.417 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:44.417 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # 
i=2 00:21:44.417 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:44.417 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.417 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.417 [2024-10-15 09:21:28.164109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:44.417 [2024-10-15 09:21:28.164224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:44.417 [2024-10-15 09:21:28.164262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:44.417 [2024-10-15 09:21:28.164283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:44.417 [2024-10-15 09:21:28.164963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:44.417 [2024-10-15 09:21:28.164993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:44.417 [2024-10-15 09:21:28.165109] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:44.417 [2024-10-15 09:21:28.165190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:44.418 [2024-10-15 09:21:28.165363] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:44.418 [2024-10-15 09:21:28.165384] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:44.418 [2024-10-15 09:21:28.165683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:44.418 [2024-10-15 09:21:28.170893] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:44.418 [2024-10-15 09:21:28.171052] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:21:44.418 [2024-10-15 09:21:28.171737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.418 pt3 00:21:44.418 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.418 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:44.418 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:44.418 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:44.418 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:44.418 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:44.418 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:44.418 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.418 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.418 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.418 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.418 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.418 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.418 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.418 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.418 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.418 09:21:28 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.418 "name": "raid_bdev1", 00:21:44.418 "uuid": "cf237ffe-103b-48bf-8beb-fb00fb5ee6ad", 00:21:44.418 "strip_size_kb": 64, 00:21:44.418 "state": "online", 00:21:44.418 "raid_level": "raid5f", 00:21:44.418 "superblock": true, 00:21:44.418 "num_base_bdevs": 3, 00:21:44.418 "num_base_bdevs_discovered": 2, 00:21:44.418 "num_base_bdevs_operational": 2, 00:21:44.418 "base_bdevs_list": [ 00:21:44.418 { 00:21:44.418 "name": null, 00:21:44.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.418 "is_configured": false, 00:21:44.418 "data_offset": 2048, 00:21:44.418 "data_size": 63488 00:21:44.418 }, 00:21:44.418 { 00:21:44.418 "name": "pt2", 00:21:44.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:44.418 "is_configured": true, 00:21:44.418 "data_offset": 2048, 00:21:44.418 "data_size": 63488 00:21:44.418 }, 00:21:44.418 { 00:21:44.418 "name": "pt3", 00:21:44.418 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:44.418 "is_configured": true, 00:21:44.418 "data_offset": 2048, 00:21:44.418 "data_size": 63488 00:21:44.418 } 00:21:44.418 ] 00:21:44.418 }' 00:21:44.418 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.418 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.986 [2024-10-15 09:21:28.685995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:44.986 [2024-10-15 09:21:28.686047] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:44.986 [2024-10-15 09:21:28.686193] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:21:44.986 [2024-10-15 09:21:28.686299] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:44.986 [2024-10-15 09:21:28.686327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:44.986 09:21:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.986 [2024-10-15 09:21:28.757995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:44.986 [2024-10-15 09:21:28.758222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:44.986 [2024-10-15 09:21:28.758264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:44.986 [2024-10-15 09:21:28.758280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:44.986 [2024-10-15 09:21:28.761380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:44.986 [2024-10-15 09:21:28.761540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:44.986 [2024-10-15 09:21:28.761675] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:44.986 [2024-10-15 09:21:28.761742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:44.986 [2024-10-15 09:21:28.761930] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:44.986 [2024-10-15 09:21:28.761948] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:44.986 [2024-10-15 09:21:28.761971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:44.986 [2024-10-15 09:21:28.762046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:44.986 pt1 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:21:44.986 09:21:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.986 "name": "raid_bdev1", 00:21:44.986 "uuid": "cf237ffe-103b-48bf-8beb-fb00fb5ee6ad", 00:21:44.986 "strip_size_kb": 64, 00:21:44.986 "state": "configuring", 00:21:44.986 "raid_level": "raid5f", 00:21:44.986 
"superblock": true, 00:21:44.986 "num_base_bdevs": 3, 00:21:44.986 "num_base_bdevs_discovered": 1, 00:21:44.986 "num_base_bdevs_operational": 2, 00:21:44.986 "base_bdevs_list": [ 00:21:44.986 { 00:21:44.986 "name": null, 00:21:44.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.986 "is_configured": false, 00:21:44.986 "data_offset": 2048, 00:21:44.986 "data_size": 63488 00:21:44.986 }, 00:21:44.986 { 00:21:44.986 "name": "pt2", 00:21:44.986 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:44.986 "is_configured": true, 00:21:44.986 "data_offset": 2048, 00:21:44.986 "data_size": 63488 00:21:44.986 }, 00:21:44.986 { 00:21:44.986 "name": null, 00:21:44.986 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:44.986 "is_configured": false, 00:21:44.986 "data_offset": 2048, 00:21:44.986 "data_size": 63488 00:21:44.986 } 00:21:44.986 ] 00:21:44.986 }' 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.986 09:21:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.554 [2024-10-15 09:21:29.338273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:45.554 [2024-10-15 09:21:29.338372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.554 [2024-10-15 09:21:29.338410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:21:45.554 [2024-10-15 09:21:29.338427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.554 [2024-10-15 09:21:29.339071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.554 [2024-10-15 09:21:29.339103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:45.554 [2024-10-15 09:21:29.339237] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:45.554 [2024-10-15 09:21:29.339273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:45.554 [2024-10-15 09:21:29.339445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:45.554 [2024-10-15 09:21:29.339510] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:45.554 [2024-10-15 09:21:29.339861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:45.554 [2024-10-15 09:21:29.345002] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:45.554 [2024-10-15 09:21:29.345034] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:45.554 [2024-10-15 09:21:29.345361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.554 pt3 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:45.554 "name": "raid_bdev1", 00:21:45.554 "uuid": "cf237ffe-103b-48bf-8beb-fb00fb5ee6ad", 00:21:45.554 "strip_size_kb": 64, 00:21:45.554 "state": "online", 00:21:45.554 "raid_level": 
"raid5f", 00:21:45.554 "superblock": true, 00:21:45.554 "num_base_bdevs": 3, 00:21:45.554 "num_base_bdevs_discovered": 2, 00:21:45.554 "num_base_bdevs_operational": 2, 00:21:45.554 "base_bdevs_list": [ 00:21:45.554 { 00:21:45.554 "name": null, 00:21:45.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.554 "is_configured": false, 00:21:45.554 "data_offset": 2048, 00:21:45.554 "data_size": 63488 00:21:45.554 }, 00:21:45.554 { 00:21:45.554 "name": "pt2", 00:21:45.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:45.554 "is_configured": true, 00:21:45.554 "data_offset": 2048, 00:21:45.554 "data_size": 63488 00:21:45.554 }, 00:21:45.554 { 00:21:45.554 "name": "pt3", 00:21:45.554 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:45.554 "is_configured": true, 00:21:45.554 "data_offset": 2048, 00:21:45.554 "data_size": 63488 00:21:45.554 } 00:21:45.554 ] 00:21:45.554 }' 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:45.554 09:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.121 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:46.121 09:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.121 09:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.121 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:46.121 09:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.121 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:46.121 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:46.121 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:21:46.121 09:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.121 09:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.121 [2024-10-15 09:21:29.935774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:46.121 09:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.121 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' cf237ffe-103b-48bf-8beb-fb00fb5ee6ad '!=' cf237ffe-103b-48bf-8beb-fb00fb5ee6ad ']' 00:21:46.121 09:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81796 00:21:46.121 09:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81796 ']' 00:21:46.121 09:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81796 00:21:46.121 09:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:21:46.121 09:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:46.121 09:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81796 00:21:46.121 09:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:46.121 09:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:46.121 09:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81796' 00:21:46.121 killing process with pid 81796 00:21:46.121 09:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 81796 00:21:46.121 [2024-10-15 09:21:30.009694] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:46.121 09:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 81796 
00:21:46.121 [2024-10-15 09:21:30.009964] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:46.121 [2024-10-15 09:21:30.010184] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:46.121 [2024-10-15 09:21:30.010342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:46.379 [2024-10-15 09:21:30.302307] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:47.753 09:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:47.753 00:21:47.753 real 0m8.848s 00:21:47.753 user 0m14.273s 00:21:47.753 sys 0m1.384s 00:21:47.753 ************************************ 00:21:47.753 END TEST raid5f_superblock_test 00:21:47.753 ************************************ 00:21:47.753 09:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:47.753 09:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.753 09:21:31 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:21:47.753 09:21:31 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:21:47.753 09:21:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:21:47.753 09:21:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:47.753 09:21:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:47.753 ************************************ 00:21:47.753 START TEST raid5f_rebuild_test 00:21:47.753 ************************************ 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:47.753 09:21:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82245 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82245 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 82245 ']' 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:47.753 09:21:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.753 [2024-10-15 09:21:31.592627] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:21:47.753 [2024-10-15 09:21:31.593352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82245 ] 00:21:47.753 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:47.753 Zero copy mechanism will not be used. 00:21:48.012 [2024-10-15 09:21:31.771341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.012 [2024-10-15 09:21:31.917871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.270 [2024-10-15 09:21:32.139891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:48.270 [2024-10-15 09:21:32.139965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:48.844 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:48.844 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:21:48.844 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:48.844 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:48.844 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.844 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.844 BaseBdev1_malloc 00:21:48.844 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.844 
09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:48.844 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.844 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.844 [2024-10-15 09:21:32.663085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:48.844 [2024-10-15 09:21:32.663208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.845 [2024-10-15 09:21:32.663250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:48.845 [2024-10-15 09:21:32.663271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.845 [2024-10-15 09:21:32.666345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.845 [2024-10-15 09:21:32.666533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:48.845 BaseBdev1 00:21:48.845 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.845 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:48.845 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:48.845 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.845 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.845 BaseBdev2_malloc 00:21:48.845 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.845 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:48.845 09:21:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.845 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.845 [2024-10-15 09:21:32.723321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:48.845 [2024-10-15 09:21:32.723410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.845 [2024-10-15 09:21:32.723443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:48.845 [2024-10-15 09:21:32.723463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.845 [2024-10-15 09:21:32.726393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.845 [2024-10-15 09:21:32.726443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:48.845 BaseBdev2 00:21:48.845 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.845 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:48.845 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:48.845 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.845 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.122 BaseBdev3_malloc 00:21:49.122 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.122 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:49.122 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.122 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.122 [2024-10-15 09:21:32.797353] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:49.122 [2024-10-15 09:21:32.797443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:49.122 [2024-10-15 09:21:32.797481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:49.122 [2024-10-15 09:21:32.797507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:49.122 [2024-10-15 09:21:32.800743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:49.122 [2024-10-15 09:21:32.800804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:49.122 BaseBdev3 00:21:49.122 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.122 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:49.122 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.122 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.122 spare_malloc 00:21:49.122 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.122 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:49.122 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.122 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.122 spare_delay 00:21:49.122 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.122 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:49.122 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:49.122 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.122 [2024-10-15 09:21:32.873027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:49.122 [2024-10-15 09:21:32.873107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:49.122 [2024-10-15 09:21:32.873150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:21:49.122 [2024-10-15 09:21:32.873172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:49.122 [2024-10-15 09:21:32.876197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:49.122 [2024-10-15 09:21:32.876252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:49.122 spare 00:21:49.122 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.123 [2024-10-15 09:21:32.885210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:49.123 [2024-10-15 09:21:32.887775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:49.123 [2024-10-15 09:21:32.888022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:49.123 [2024-10-15 09:21:32.888187] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:49.123 [2024-10-15 09:21:32.888207] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:21:49.123 [2024-10-15 
09:21:32.888577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:49.123 [2024-10-15 09:21:32.893783] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:49.123 [2024-10-15 09:21:32.893839] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:49.123 [2024-10-15 09:21:32.894109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:49.123 "name": "raid_bdev1", 00:21:49.123 "uuid": "de92dbc4-f7e2-422a-ac2b-b44be7929573", 00:21:49.123 "strip_size_kb": 64, 00:21:49.123 "state": "online", 00:21:49.123 "raid_level": "raid5f", 00:21:49.123 "superblock": false, 00:21:49.123 "num_base_bdevs": 3, 00:21:49.123 "num_base_bdevs_discovered": 3, 00:21:49.123 "num_base_bdevs_operational": 3, 00:21:49.123 "base_bdevs_list": [ 00:21:49.123 { 00:21:49.123 "name": "BaseBdev1", 00:21:49.123 "uuid": "ba0dcdbe-2dc8-5a71-b649-4dc8d83682a9", 00:21:49.123 "is_configured": true, 00:21:49.123 "data_offset": 0, 00:21:49.123 "data_size": 65536 00:21:49.123 }, 00:21:49.123 { 00:21:49.123 "name": "BaseBdev2", 00:21:49.123 "uuid": "009eba86-505b-5159-aaee-ac96d89e5f8a", 00:21:49.123 "is_configured": true, 00:21:49.123 "data_offset": 0, 00:21:49.123 "data_size": 65536 00:21:49.123 }, 00:21:49.123 { 00:21:49.123 "name": "BaseBdev3", 00:21:49.123 "uuid": "42a85a06-55ca-5682-85ff-b613c168ca38", 00:21:49.123 "is_configured": true, 00:21:49.123 "data_offset": 0, 00:21:49.123 "data_size": 65536 00:21:49.123 } 00:21:49.123 ] 00:21:49.123 }' 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:49.123 09:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.760 09:21:33 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.760 [2024-10-15 09:21:33.436659] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:49.760 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:50.019 [2024-10-15 09:21:33.788588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:50.019 /dev/nbd0 00:21:50.019 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:50.019 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:50.019 09:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:50.019 09:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:21:50.019 09:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:50.019 09:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:50.019 09:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:50.019 09:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:21:50.019 09:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:50.019 09:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:50.019 09:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:50.019 1+0 records in 00:21:50.019 1+0 records out 00:21:50.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489414 s, 8.4 MB/s 00:21:50.019 
09:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:50.019 09:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:21:50.019 09:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:50.019 09:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:50.019 09:21:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:21:50.019 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:50.019 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:50.019 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:21:50.019 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:21:50.019 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:21:50.019 09:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:21:50.587 512+0 records in 00:21:50.587 512+0 records out 00:21:50.587 67108864 bytes (67 MB, 64 MiB) copied, 0.448424 s, 150 MB/s 00:21:50.587 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:50.587 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:50.587 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:50.587 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:50.587 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:50.587 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:21:50.587 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:50.846 [2024-10-15 09:21:34.585890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.846 [2024-10-15 09:21:34.604066] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:50.846 09:21:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.846 "name": "raid_bdev1", 00:21:50.846 "uuid": "de92dbc4-f7e2-422a-ac2b-b44be7929573", 00:21:50.846 "strip_size_kb": 64, 00:21:50.846 "state": "online", 00:21:50.846 "raid_level": "raid5f", 00:21:50.846 "superblock": false, 00:21:50.846 "num_base_bdevs": 3, 00:21:50.846 "num_base_bdevs_discovered": 2, 00:21:50.846 "num_base_bdevs_operational": 2, 00:21:50.846 "base_bdevs_list": [ 00:21:50.846 { 00:21:50.846 "name": null, 00:21:50.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.846 "is_configured": false, 00:21:50.846 "data_offset": 0, 00:21:50.846 "data_size": 65536 00:21:50.846 }, 00:21:50.846 { 00:21:50.846 
"name": "BaseBdev2", 00:21:50.846 "uuid": "009eba86-505b-5159-aaee-ac96d89e5f8a", 00:21:50.846 "is_configured": true, 00:21:50.846 "data_offset": 0, 00:21:50.846 "data_size": 65536 00:21:50.846 }, 00:21:50.846 { 00:21:50.846 "name": "BaseBdev3", 00:21:50.846 "uuid": "42a85a06-55ca-5682-85ff-b613c168ca38", 00:21:50.846 "is_configured": true, 00:21:50.846 "data_offset": 0, 00:21:50.846 "data_size": 65536 00:21:50.846 } 00:21:50.846 ] 00:21:50.846 }' 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.846 09:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.414 09:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:51.414 09:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.414 09:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.414 [2024-10-15 09:21:35.104239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:51.414 [2024-10-15 09:21:35.120267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:21:51.414 09:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.414 09:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:51.414 [2024-10-15 09:21:35.127945] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:52.411 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.411 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:52.411 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:52.411 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:21:52.411 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:52.411 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.411 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.411 09:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.411 09:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.412 09:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.412 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:52.412 "name": "raid_bdev1", 00:21:52.412 "uuid": "de92dbc4-f7e2-422a-ac2b-b44be7929573", 00:21:52.412 "strip_size_kb": 64, 00:21:52.412 "state": "online", 00:21:52.412 "raid_level": "raid5f", 00:21:52.412 "superblock": false, 00:21:52.412 "num_base_bdevs": 3, 00:21:52.412 "num_base_bdevs_discovered": 3, 00:21:52.412 "num_base_bdevs_operational": 3, 00:21:52.412 "process": { 00:21:52.412 "type": "rebuild", 00:21:52.412 "target": "spare", 00:21:52.412 "progress": { 00:21:52.412 "blocks": 18432, 00:21:52.412 "percent": 14 00:21:52.412 } 00:21:52.412 }, 00:21:52.412 "base_bdevs_list": [ 00:21:52.412 { 00:21:52.412 "name": "spare", 00:21:52.412 "uuid": "b5e0c876-9cfe-59bd-b4e0-b3aeb1a8d528", 00:21:52.412 "is_configured": true, 00:21:52.412 "data_offset": 0, 00:21:52.412 "data_size": 65536 00:21:52.412 }, 00:21:52.412 { 00:21:52.412 "name": "BaseBdev2", 00:21:52.412 "uuid": "009eba86-505b-5159-aaee-ac96d89e5f8a", 00:21:52.412 "is_configured": true, 00:21:52.412 "data_offset": 0, 00:21:52.412 "data_size": 65536 00:21:52.412 }, 00:21:52.412 { 00:21:52.412 "name": "BaseBdev3", 00:21:52.412 "uuid": "42a85a06-55ca-5682-85ff-b613c168ca38", 00:21:52.412 "is_configured": true, 00:21:52.412 "data_offset": 0, 00:21:52.412 
"data_size": 65536 00:21:52.412 } 00:21:52.412 ] 00:21:52.412 }' 00:21:52.412 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:52.412 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:52.412 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:52.412 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:52.412 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:52.412 09:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.412 09:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.412 [2024-10-15 09:21:36.273543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:52.686 [2024-10-15 09:21:36.345633] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:52.686 [2024-10-15 09:21:36.345992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.686 [2024-10-15 09:21:36.346032] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:52.686 [2024-10-15 09:21:36.346047] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:52.686 09:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.686 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:52.686 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:52.686 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:52.686 09:21:36 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:52.686 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:52.686 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:52.686 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:52.686 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:52.686 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:52.686 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:52.686 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.686 09:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.686 09:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.686 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.686 09:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.686 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:52.686 "name": "raid_bdev1", 00:21:52.686 "uuid": "de92dbc4-f7e2-422a-ac2b-b44be7929573", 00:21:52.686 "strip_size_kb": 64, 00:21:52.686 "state": "online", 00:21:52.686 "raid_level": "raid5f", 00:21:52.686 "superblock": false, 00:21:52.686 "num_base_bdevs": 3, 00:21:52.686 "num_base_bdevs_discovered": 2, 00:21:52.686 "num_base_bdevs_operational": 2, 00:21:52.686 "base_bdevs_list": [ 00:21:52.686 { 00:21:52.686 "name": null, 00:21:52.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.686 "is_configured": false, 00:21:52.686 "data_offset": 0, 00:21:52.686 "data_size": 65536 00:21:52.686 }, 00:21:52.687 { 00:21:52.687 "name": "BaseBdev2", 00:21:52.687 
"uuid": "009eba86-505b-5159-aaee-ac96d89e5f8a", 00:21:52.687 "is_configured": true, 00:21:52.687 "data_offset": 0, 00:21:52.687 "data_size": 65536 00:21:52.687 }, 00:21:52.687 { 00:21:52.687 "name": "BaseBdev3", 00:21:52.687 "uuid": "42a85a06-55ca-5682-85ff-b613c168ca38", 00:21:52.687 "is_configured": true, 00:21:52.687 "data_offset": 0, 00:21:52.687 "data_size": 65536 00:21:52.687 } 00:21:52.687 ] 00:21:52.687 }' 00:21:52.687 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:52.687 09:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.255 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:53.255 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:53.255 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:53.255 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:53.255 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:53.255 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.255 09:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.255 09:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.255 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.255 09:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.255 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:53.255 "name": "raid_bdev1", 00:21:53.255 "uuid": "de92dbc4-f7e2-422a-ac2b-b44be7929573", 00:21:53.255 "strip_size_kb": 64, 00:21:53.255 "state": "online", 00:21:53.255 "raid_level": 
"raid5f", 00:21:53.255 "superblock": false, 00:21:53.255 "num_base_bdevs": 3, 00:21:53.255 "num_base_bdevs_discovered": 2, 00:21:53.255 "num_base_bdevs_operational": 2, 00:21:53.255 "base_bdevs_list": [ 00:21:53.255 { 00:21:53.255 "name": null, 00:21:53.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.255 "is_configured": false, 00:21:53.255 "data_offset": 0, 00:21:53.255 "data_size": 65536 00:21:53.255 }, 00:21:53.255 { 00:21:53.255 "name": "BaseBdev2", 00:21:53.255 "uuid": "009eba86-505b-5159-aaee-ac96d89e5f8a", 00:21:53.255 "is_configured": true, 00:21:53.255 "data_offset": 0, 00:21:53.255 "data_size": 65536 00:21:53.255 }, 00:21:53.255 { 00:21:53.255 "name": "BaseBdev3", 00:21:53.255 "uuid": "42a85a06-55ca-5682-85ff-b613c168ca38", 00:21:53.255 "is_configured": true, 00:21:53.255 "data_offset": 0, 00:21:53.255 "data_size": 65536 00:21:53.255 } 00:21:53.255 ] 00:21:53.255 }' 00:21:53.255 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:53.255 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:53.255 09:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:53.255 09:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:53.255 09:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:53.255 09:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.255 09:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.255 [2024-10-15 09:21:37.043447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:53.255 [2024-10-15 09:21:37.059090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:21:53.255 09:21:37 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.255 09:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:53.255 [2024-10-15 09:21:37.066724] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:54.192 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:54.192 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:54.192 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:54.192 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:54.192 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:54.192 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.193 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.193 09:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.193 09:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.193 09:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:54.451 "name": "raid_bdev1", 00:21:54.451 "uuid": "de92dbc4-f7e2-422a-ac2b-b44be7929573", 00:21:54.451 "strip_size_kb": 64, 00:21:54.451 "state": "online", 00:21:54.451 "raid_level": "raid5f", 00:21:54.451 "superblock": false, 00:21:54.451 "num_base_bdevs": 3, 00:21:54.451 "num_base_bdevs_discovered": 3, 00:21:54.451 "num_base_bdevs_operational": 3, 00:21:54.451 "process": { 00:21:54.451 "type": "rebuild", 00:21:54.451 "target": "spare", 00:21:54.451 "progress": { 00:21:54.451 "blocks": 18432, 00:21:54.451 
"percent": 14 00:21:54.451 } 00:21:54.451 }, 00:21:54.451 "base_bdevs_list": [ 00:21:54.451 { 00:21:54.451 "name": "spare", 00:21:54.451 "uuid": "b5e0c876-9cfe-59bd-b4e0-b3aeb1a8d528", 00:21:54.451 "is_configured": true, 00:21:54.451 "data_offset": 0, 00:21:54.451 "data_size": 65536 00:21:54.451 }, 00:21:54.451 { 00:21:54.451 "name": "BaseBdev2", 00:21:54.451 "uuid": "009eba86-505b-5159-aaee-ac96d89e5f8a", 00:21:54.451 "is_configured": true, 00:21:54.451 "data_offset": 0, 00:21:54.451 "data_size": 65536 00:21:54.451 }, 00:21:54.451 { 00:21:54.451 "name": "BaseBdev3", 00:21:54.451 "uuid": "42a85a06-55ca-5682-85ff-b613c168ca38", 00:21:54.451 "is_configured": true, 00:21:54.451 "data_offset": 0, 00:21:54.451 "data_size": 65536 00:21:54.451 } 00:21:54.451 ] 00:21:54.451 }' 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=609 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:54.451 "name": "raid_bdev1", 00:21:54.451 "uuid": "de92dbc4-f7e2-422a-ac2b-b44be7929573", 00:21:54.451 "strip_size_kb": 64, 00:21:54.451 "state": "online", 00:21:54.451 "raid_level": "raid5f", 00:21:54.451 "superblock": false, 00:21:54.451 "num_base_bdevs": 3, 00:21:54.451 "num_base_bdevs_discovered": 3, 00:21:54.451 "num_base_bdevs_operational": 3, 00:21:54.451 "process": { 00:21:54.451 "type": "rebuild", 00:21:54.451 "target": "spare", 00:21:54.451 "progress": { 00:21:54.451 "blocks": 22528, 00:21:54.451 "percent": 17 00:21:54.451 } 00:21:54.451 }, 00:21:54.451 "base_bdevs_list": [ 00:21:54.451 { 00:21:54.451 "name": "spare", 00:21:54.451 "uuid": "b5e0c876-9cfe-59bd-b4e0-b3aeb1a8d528", 00:21:54.451 "is_configured": true, 00:21:54.451 "data_offset": 0, 00:21:54.451 "data_size": 65536 00:21:54.451 }, 00:21:54.451 { 00:21:54.451 "name": "BaseBdev2", 00:21:54.451 "uuid": "009eba86-505b-5159-aaee-ac96d89e5f8a", 00:21:54.451 "is_configured": true, 00:21:54.451 "data_offset": 0, 00:21:54.451 
"data_size": 65536 00:21:54.451 }, 00:21:54.451 { 00:21:54.451 "name": "BaseBdev3", 00:21:54.451 "uuid": "42a85a06-55ca-5682-85ff-b613c168ca38", 00:21:54.451 "is_configured": true, 00:21:54.451 "data_offset": 0, 00:21:54.451 "data_size": 65536 00:21:54.451 } 00:21:54.451 ] 00:21:54.451 }' 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:54.451 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:54.711 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:54.711 09:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:55.647 09:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:55.647 09:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:55.647 09:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:55.647 09:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:55.647 09:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:55.647 09:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:55.647 09:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.647 09:21:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.647 09:21:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.647 09:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.647 09:21:39 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.647 09:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:55.647 "name": "raid_bdev1", 00:21:55.647 "uuid": "de92dbc4-f7e2-422a-ac2b-b44be7929573", 00:21:55.647 "strip_size_kb": 64, 00:21:55.647 "state": "online", 00:21:55.647 "raid_level": "raid5f", 00:21:55.647 "superblock": false, 00:21:55.647 "num_base_bdevs": 3, 00:21:55.647 "num_base_bdevs_discovered": 3, 00:21:55.647 "num_base_bdevs_operational": 3, 00:21:55.647 "process": { 00:21:55.647 "type": "rebuild", 00:21:55.647 "target": "spare", 00:21:55.647 "progress": { 00:21:55.647 "blocks": 47104, 00:21:55.647 "percent": 35 00:21:55.647 } 00:21:55.647 }, 00:21:55.647 "base_bdevs_list": [ 00:21:55.647 { 00:21:55.647 "name": "spare", 00:21:55.647 "uuid": "b5e0c876-9cfe-59bd-b4e0-b3aeb1a8d528", 00:21:55.647 "is_configured": true, 00:21:55.647 "data_offset": 0, 00:21:55.647 "data_size": 65536 00:21:55.647 }, 00:21:55.647 { 00:21:55.647 "name": "BaseBdev2", 00:21:55.647 "uuid": "009eba86-505b-5159-aaee-ac96d89e5f8a", 00:21:55.647 "is_configured": true, 00:21:55.647 "data_offset": 0, 00:21:55.647 "data_size": 65536 00:21:55.647 }, 00:21:55.647 { 00:21:55.647 "name": "BaseBdev3", 00:21:55.647 "uuid": "42a85a06-55ca-5682-85ff-b613c168ca38", 00:21:55.647 "is_configured": true, 00:21:55.647 "data_offset": 0, 00:21:55.647 "data_size": 65536 00:21:55.647 } 00:21:55.647 ] 00:21:55.647 }' 00:21:55.647 09:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:55.647 09:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:55.647 09:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:55.907 09:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:55.907 09:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:21:56.843 09:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:56.843 09:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:56.843 09:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:56.843 09:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:56.843 09:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:56.843 09:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:56.843 09:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.843 09:21:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.843 09:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.843 09:21:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.843 09:21:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.843 09:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:56.843 "name": "raid_bdev1", 00:21:56.843 "uuid": "de92dbc4-f7e2-422a-ac2b-b44be7929573", 00:21:56.843 "strip_size_kb": 64, 00:21:56.843 "state": "online", 00:21:56.843 "raid_level": "raid5f", 00:21:56.843 "superblock": false, 00:21:56.843 "num_base_bdevs": 3, 00:21:56.843 "num_base_bdevs_discovered": 3, 00:21:56.843 "num_base_bdevs_operational": 3, 00:21:56.843 "process": { 00:21:56.843 "type": "rebuild", 00:21:56.843 "target": "spare", 00:21:56.843 "progress": { 00:21:56.843 "blocks": 69632, 00:21:56.843 "percent": 53 00:21:56.843 } 00:21:56.843 }, 00:21:56.843 "base_bdevs_list": [ 00:21:56.843 { 00:21:56.843 "name": "spare", 00:21:56.843 "uuid": 
"b5e0c876-9cfe-59bd-b4e0-b3aeb1a8d528", 00:21:56.843 "is_configured": true, 00:21:56.843 "data_offset": 0, 00:21:56.843 "data_size": 65536 00:21:56.843 }, 00:21:56.843 { 00:21:56.843 "name": "BaseBdev2", 00:21:56.843 "uuid": "009eba86-505b-5159-aaee-ac96d89e5f8a", 00:21:56.843 "is_configured": true, 00:21:56.843 "data_offset": 0, 00:21:56.843 "data_size": 65536 00:21:56.843 }, 00:21:56.843 { 00:21:56.843 "name": "BaseBdev3", 00:21:56.843 "uuid": "42a85a06-55ca-5682-85ff-b613c168ca38", 00:21:56.843 "is_configured": true, 00:21:56.843 "data_offset": 0, 00:21:56.843 "data_size": 65536 00:21:56.843 } 00:21:56.843 ] 00:21:56.843 }' 00:21:56.843 09:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:56.843 09:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:56.843 09:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:56.843 09:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:56.843 09:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:58.220 09:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:58.220 09:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:58.220 09:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:58.220 09:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:58.220 09:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:58.220 09:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:58.220 09:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.220 09:21:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.220 09:21:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.220 09:21:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.220 09:21:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.220 09:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:58.220 "name": "raid_bdev1", 00:21:58.220 "uuid": "de92dbc4-f7e2-422a-ac2b-b44be7929573", 00:21:58.220 "strip_size_kb": 64, 00:21:58.220 "state": "online", 00:21:58.220 "raid_level": "raid5f", 00:21:58.220 "superblock": false, 00:21:58.220 "num_base_bdevs": 3, 00:21:58.220 "num_base_bdevs_discovered": 3, 00:21:58.220 "num_base_bdevs_operational": 3, 00:21:58.220 "process": { 00:21:58.220 "type": "rebuild", 00:21:58.220 "target": "spare", 00:21:58.220 "progress": { 00:21:58.220 "blocks": 94208, 00:21:58.220 "percent": 71 00:21:58.220 } 00:21:58.220 }, 00:21:58.220 "base_bdevs_list": [ 00:21:58.220 { 00:21:58.220 "name": "spare", 00:21:58.220 "uuid": "b5e0c876-9cfe-59bd-b4e0-b3aeb1a8d528", 00:21:58.220 "is_configured": true, 00:21:58.220 "data_offset": 0, 00:21:58.220 "data_size": 65536 00:21:58.220 }, 00:21:58.220 { 00:21:58.220 "name": "BaseBdev2", 00:21:58.220 "uuid": "009eba86-505b-5159-aaee-ac96d89e5f8a", 00:21:58.220 "is_configured": true, 00:21:58.220 "data_offset": 0, 00:21:58.220 "data_size": 65536 00:21:58.220 }, 00:21:58.220 { 00:21:58.220 "name": "BaseBdev3", 00:21:58.220 "uuid": "42a85a06-55ca-5682-85ff-b613c168ca38", 00:21:58.220 "is_configured": true, 00:21:58.221 "data_offset": 0, 00:21:58.221 "data_size": 65536 00:21:58.221 } 00:21:58.221 ] 00:21:58.221 }' 00:21:58.221 09:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:58.221 09:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:58.221 09:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:58.221 09:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:58.221 09:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:59.156 09:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:59.156 09:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:59.156 09:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:59.156 09:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:59.156 09:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:59.156 09:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:59.156 09:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.156 09:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.156 09:21:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.156 09:21:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.156 09:21:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.156 09:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:59.156 "name": "raid_bdev1", 00:21:59.156 "uuid": "de92dbc4-f7e2-422a-ac2b-b44be7929573", 00:21:59.156 "strip_size_kb": 64, 00:21:59.156 "state": "online", 00:21:59.156 "raid_level": "raid5f", 00:21:59.156 "superblock": false, 00:21:59.156 "num_base_bdevs": 3, 00:21:59.156 "num_base_bdevs_discovered": 3, 00:21:59.156 
"num_base_bdevs_operational": 3, 00:21:59.156 "process": { 00:21:59.156 "type": "rebuild", 00:21:59.156 "target": "spare", 00:21:59.156 "progress": { 00:21:59.156 "blocks": 116736, 00:21:59.156 "percent": 89 00:21:59.156 } 00:21:59.156 }, 00:21:59.156 "base_bdevs_list": [ 00:21:59.156 { 00:21:59.156 "name": "spare", 00:21:59.156 "uuid": "b5e0c876-9cfe-59bd-b4e0-b3aeb1a8d528", 00:21:59.156 "is_configured": true, 00:21:59.156 "data_offset": 0, 00:21:59.156 "data_size": 65536 00:21:59.156 }, 00:21:59.156 { 00:21:59.156 "name": "BaseBdev2", 00:21:59.156 "uuid": "009eba86-505b-5159-aaee-ac96d89e5f8a", 00:21:59.156 "is_configured": true, 00:21:59.156 "data_offset": 0, 00:21:59.156 "data_size": 65536 00:21:59.156 }, 00:21:59.156 { 00:21:59.156 "name": "BaseBdev3", 00:21:59.156 "uuid": "42a85a06-55ca-5682-85ff-b613c168ca38", 00:21:59.156 "is_configured": true, 00:21:59.156 "data_offset": 0, 00:21:59.156 "data_size": 65536 00:21:59.156 } 00:21:59.156 ] 00:21:59.156 }' 00:21:59.156 09:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:59.156 09:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:59.156 09:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:59.156 09:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:59.156 09:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:59.722 [2024-10-15 09:21:43.560814] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:59.722 [2024-10-15 09:21:43.560976] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:59.722 [2024-10-15 09:21:43.561048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:00.289 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout ))
00:22:00.289 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:22:00.289 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:22:00.289 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:22:00.289 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:22:00.289 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:22:00.289 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:00.289 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:00.289 09:21:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:00.289 09:21:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:22:00.289 09:21:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:00.289 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:22:00.289 "name": "raid_bdev1",
00:22:00.289 "uuid": "de92dbc4-f7e2-422a-ac2b-b44be7929573",
00:22:00.289 "strip_size_kb": 64,
00:22:00.289 "state": "online",
00:22:00.289 "raid_level": "raid5f",
00:22:00.289 "superblock": false,
00:22:00.289 "num_base_bdevs": 3,
00:22:00.289 "num_base_bdevs_discovered": 3,
00:22:00.289 "num_base_bdevs_operational": 3,
00:22:00.289 "base_bdevs_list": [
00:22:00.289 {
00:22:00.289 "name": "spare",
00:22:00.289 "uuid": "b5e0c876-9cfe-59bd-b4e0-b3aeb1a8d528",
00:22:00.289 "is_configured": true,
00:22:00.289 "data_offset": 0,
00:22:00.289 "data_size": 65536
00:22:00.289 },
00:22:00.289 {
00:22:00.289 "name": "BaseBdev2",
00:22:00.289 "uuid": "009eba86-505b-5159-aaee-ac96d89e5f8a",
00:22:00.289 "is_configured": true,
00:22:00.289 "data_offset": 0,
00:22:00.289 "data_size": 65536
00:22:00.289 },
00:22:00.289 {
00:22:00.289 "name": "BaseBdev3",
00:22:00.289 "uuid": "42a85a06-55ca-5682-85ff-b613c168ca38",
00:22:00.289 "is_configured": true,
00:22:00.289 "data_offset": 0,
00:22:00.289 "data_size": 65536
00:22:00.289 }
00:22:00.289 ]
00:22:00.289 }'
00:22:00.289 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:22:00.289 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:22:00.289 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:22:00.548 "name": "raid_bdev1",
00:22:00.548 "uuid": "de92dbc4-f7e2-422a-ac2b-b44be7929573",
00:22:00.548 "strip_size_kb": 64,
00:22:00.548 "state": "online",
00:22:00.548 "raid_level": "raid5f",
00:22:00.548 "superblock": false,
00:22:00.548 "num_base_bdevs": 3,
00:22:00.548 "num_base_bdevs_discovered": 3,
00:22:00.548 "num_base_bdevs_operational": 3,
00:22:00.548 "base_bdevs_list": [
00:22:00.548 {
00:22:00.548 "name": "spare",
00:22:00.548 "uuid": "b5e0c876-9cfe-59bd-b4e0-b3aeb1a8d528",
00:22:00.548 "is_configured": true,
00:22:00.548 "data_offset": 0,
00:22:00.548 "data_size": 65536
00:22:00.548 },
00:22:00.548 {
00:22:00.548 "name": "BaseBdev2",
00:22:00.548 "uuid": "009eba86-505b-5159-aaee-ac96d89e5f8a",
00:22:00.548 "is_configured": true,
00:22:00.548 "data_offset": 0,
00:22:00.548 "data_size": 65536
00:22:00.548 },
00:22:00.548 {
00:22:00.548 "name": "BaseBdev3",
00:22:00.548 "uuid": "42a85a06-55ca-5682-85ff-b613c168ca38",
00:22:00.548 "is_configured": true,
00:22:00.548 "data_offset": 0,
00:22:00.548 "data_size": 65536
00:22:00.548 }
00:22:00.548 ]
00:22:00.548 }'
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:00.548 "name": "raid_bdev1",
00:22:00.548 "uuid": "de92dbc4-f7e2-422a-ac2b-b44be7929573",
00:22:00.548 "strip_size_kb": 64,
00:22:00.548 "state": "online",
00:22:00.548 "raid_level": "raid5f",
00:22:00.548 "superblock": false,
00:22:00.548 "num_base_bdevs": 3,
00:22:00.548 "num_base_bdevs_discovered": 3,
00:22:00.548 "num_base_bdevs_operational": 3,
00:22:00.548 "base_bdevs_list": [
00:22:00.548 {
00:22:00.548 "name": "spare",
00:22:00.548 "uuid": "b5e0c876-9cfe-59bd-b4e0-b3aeb1a8d528",
00:22:00.548 "is_configured": true,
00:22:00.548 "data_offset": 0,
00:22:00.548 "data_size": 65536
00:22:00.548 },
00:22:00.548 {
00:22:00.548 "name": "BaseBdev2",
00:22:00.548 "uuid": "009eba86-505b-5159-aaee-ac96d89e5f8a",
00:22:00.548 "is_configured": true,
00:22:00.548 "data_offset": 0,
00:22:00.548 "data_size": 65536
00:22:00.548 },
00:22:00.548 {
00:22:00.548 "name": "BaseBdev3",
00:22:00.548 "uuid": "42a85a06-55ca-5682-85ff-b613c168ca38",
00:22:00.548 "is_configured": true,
00:22:00.548 "data_offset": 0,
00:22:00.548 "data_size": 65536
00:22:00.548 }
00:22:00.548 ]
00:22:00.548 }'
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:00.548 09:21:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:22:01.115 [2024-10-15 09:21:44.918879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:01.115 [2024-10-15 09:21:44.918919] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:01.115 [2024-10-15 09:21:44.919043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:01.115 [2024-10-15 09:21:44.919179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:01.115 [2024-10-15 09:21:44.919207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:22:01.115 09:21:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:22:01.374 /dev/nbd0
00:22:01.633 09:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:22:01.633 09:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:22:01.633 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:22:01.633 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:22:01.633 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:22:01.633 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:22:01.633 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:22:01.633 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break
00:22:01.633 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:22:01.633 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:22:01.633 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:22:01.633 1+0 records in
00:22:01.633 1+0 records out
00:22:01.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421448 s, 9.7 MB/s
00:22:01.633 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:22:01.633 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:22:01.633 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:22:01.633 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:22:01.633 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:22:01.633 09:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:22:01.633 09:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:22:01.633 09:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:22:01.892 /dev/nbd1
00:22:01.892 09:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:22:01.892 09:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:22:01.892 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:22:01.892 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:22:01.892 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:22:01.892 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:22:01.892 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:22:01.892 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break
00:22:01.892 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:22:01.892 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:22:01.892 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:22:01.892 1+0 records in
00:22:01.892 1+0 records out
00:22:01.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368261 s, 11.1 MB/s
00:22:01.892 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:22:01.892 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:22:01.892 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:22:01.892 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:22:01.892 09:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:22:01.892 09:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:22:01.892 09:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:22:01.892 09:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:22:02.150 09:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:22:02.150 09:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:22:02.150 09:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:22:02.150 09:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:22:02.150 09:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:22:02.150 09:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:22:02.150 09:21:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:22:02.410 09:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:22:02.410 09:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:22:02.410 09:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:22:02.410 09:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:22:02.410 09:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:22:02.410 09:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:22:02.410 09:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:22:02.410 09:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:22:02.410 09:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:22:02.410 09:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:22:02.668 09:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:22:02.668 09:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:22:02.668 09:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:22:02.668 09:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:22:02.668 09:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:22:02.668 09:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:22:02.668 09:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:22:02.668 09:21:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:22:02.668 09:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']'
00:22:02.668 09:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82245
00:22:02.668 09:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 82245 ']'
00:22:02.668 09:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 82245
00:22:02.669 09:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname
00:22:02.669 09:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:02.669 09:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82245
killing process with pid 82245
Received shutdown signal, test time was about 60.000000 seconds
00:22:02.669
00:22:02.669 Latency(us)
[2024-10-15T09:21:46.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:02.669 [2024-10-15T09:21:46.597Z] ===================================================================================================================
00:22:02.669 [2024-10-15T09:21:46.597Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:22:02.669 09:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:22:02.669 09:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:22:02.669 09:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82245'
00:22:02.669 09:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 82245
[2024-10-15 09:21:46.482370] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:22:02.669 09:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 82245
00:22:03.235 [2024-10-15 09:21:46.887985] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:22:04.198 ************************************
00:22:04.198 END TEST raid5f_rebuild_test
************************************
00:22:04.198 09:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0
00:22:04.198
00:22:04.198 real 0m16.592s
00:22:04.198 user 0m21.093s
00:22:04.198 sys 0m2.085s
00:22:04.198 09:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:04.198 09:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:22:04.198 09:21:48 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true
00:22:04.198 09:21:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:22:04.198 09:21:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:22:04.198 09:21:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:22:04.198 ************************************
00:22:04.198 START TEST raid5f_rebuild_test_sb
************************************
00:22:04.198 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true
00:22:04.198 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f
00:22:04.198 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3
00:22:04.198 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:22:04.198 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:22:04.198 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']'
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']'
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64'
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82698
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82698
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82698 ']'
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:22:04.458 09:21:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:04.458 [2024-10-15 09:21:48.251357] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization...
00:22:04.458 [2024-10-15 09:21:48.251878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82698 ]
00:22:04.458 I/O size of 3145728 is greater than zero copy threshold (65536).
00:22:04.458 Zero copy mechanism will not be used.
00:22:04.716 [2024-10-15 09:21:48.425152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:04.716 [2024-10-15 09:21:48.602136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:04.975 [2024-10-15 09:21:48.839204] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:04.975 [2024-10-15 09:21:48.839521] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:05.543 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:22:05.543 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0
00:22:05.543 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:22:05.543 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:05.544 BaseBdev1_malloc
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:05.544 [2024-10-15 09:21:49.313680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:22:05.544 [2024-10-15 09:21:49.313774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:05.544 [2024-10-15 09:21:49.313814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:22:05.544 [2024-10-15 09:21:49.313834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:05.544 [2024-10-15 09:21:49.316920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:05.544 [2024-10-15 09:21:49.317112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:22:05.544 BaseBdev1
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:05.544 BaseBdev2_malloc
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:05.544 [2024-10-15 09:21:49.370198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:22:05.544 [2024-10-15 09:21:49.370309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:05.544 [2024-10-15 09:21:49.370351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:22:05.544 [2024-10-15 09:21:49.370371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:05.544 [2024-10-15 09:21:49.373327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:05.544 [2024-10-15 09:21:49.373391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:22:05.544 BaseBdev2
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:05.544 BaseBdev3_malloc
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:05.544 [2024-10-15 09:21:49.437099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:22:05.544 [2024-10-15 09:21:49.437186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:05.544 [2024-10-15 09:21:49.437223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:22:05.544 [2024-10-15 09:21:49.437253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:05.544 [2024-10-15 09:21:49.440111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:05.544 [2024-10-15 09:21:49.440189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:22:05.544 BaseBdev3
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:05.544 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:05.803 spare_malloc
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:05.803 spare_delay
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:05.803 [2024-10-15 09:21:49.500979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:22:05.803 [2024-10-15 09:21:49.501199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:05.803 [2024-10-15 09:21:49.501239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:22:05.803 [2024-10-15 09:21:49.501259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:05.803 [2024-10-15 09:21:49.504179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:05.803 [2024-10-15 09:21:49.504346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:22:05.803 spare
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:05.803 [2024-10-15 09:21:49.513196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:22:05.803 [2024-10-15 09:21:49.515737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:22:05.803 [2024-10-15 09:21:49.515992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:22:05.803 [2024-10-15 09:21:49.516272] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:22:05.803 [2024-10-15 09:21:49.516296] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:22:05.803 [2024-10-15 09:21:49.516659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:22:05.803 [2024-10-15 09:21:49.521931] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:22:05.803 [2024-10-15 09:21:49.521962] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:22:05.803 [2024-10-15 09:21:49.522205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:05.803 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:05.804 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:05.804 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:05.804 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:05.804 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:05.804 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:05.804 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:05.804 "name": "raid_bdev1",
00:22:05.804 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a",
00:22:05.804 "strip_size_kb": 64,
00:22:05.804 "state": "online",
00:22:05.804 "raid_level": "raid5f",
00:22:05.804 "superblock": true,
00:22:05.804 "num_base_bdevs": 3,
00:22:05.804 "num_base_bdevs_discovered": 3,
00:22:05.804 "num_base_bdevs_operational": 3,
00:22:05.804 "base_bdevs_list": [
00:22:05.804 {
00:22:05.804 "name": "BaseBdev1",
00:22:05.804 "uuid": "4b06ccab-7f5b-5679-aa42-d166874586d7",
00:22:05.804 "is_configured": true,
00:22:05.804 "data_offset": 2048,
00:22:05.804 "data_size": 63488
00:22:05.804 },
00:22:05.804 {
00:22:05.804 "name": "BaseBdev2",
00:22:05.804 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b",
00:22:05.804 "is_configured": true,
00:22:05.804 "data_offset": 2048,
00:22:05.804 "data_size": 63488
00:22:05.804 },
00:22:05.804 {
00:22:05.804 "name": "BaseBdev3",
00:22:05.804 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260",
00:22:05.804 "is_configured": true,
00:22:05.804 "data_offset": 2048,
00:22:05.804 "data_size": 63488
00:22:05.804 }
00:22:05.804 ]
00:22:05.804 }'
00:22:05.804 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:05.804 09:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:06.371 [2024-10-15 09:21:50.032819] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976
00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:22:06.371 09:21:50
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:06.371 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:06.630 
[2024-10-15 09:21:50.420668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:06.630 /dev/nbd0 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:06.630 1+0 records in 00:22:06.630 1+0 records out 00:22:06.630 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000624681 s, 6.6 MB/s 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:22:06.630 09:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:22:07.199 496+0 records in 00:22:07.199 496+0 records out 00:22:07.199 65011712 bytes (65 MB, 62 MiB) copied, 0.507643 s, 128 MB/s 00:22:07.199 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:07.199 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:07.199 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:07.199 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:07.199 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:07.199 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:07.199 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:07.460 [2024-10-15 09:21:51.321934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.460 [2024-10-15 09:21:51.336152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:07.460 09:21:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.460 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.720 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:07.720 "name": "raid_bdev1", 00:22:07.720 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:07.720 "strip_size_kb": 64, 00:22:07.720 "state": "online", 00:22:07.720 "raid_level": "raid5f", 00:22:07.720 "superblock": true, 00:22:07.720 "num_base_bdevs": 3, 00:22:07.720 "num_base_bdevs_discovered": 2, 00:22:07.720 "num_base_bdevs_operational": 2, 00:22:07.720 "base_bdevs_list": [ 00:22:07.720 { 00:22:07.720 "name": null, 00:22:07.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.720 "is_configured": false, 00:22:07.720 "data_offset": 0, 00:22:07.720 "data_size": 63488 00:22:07.720 }, 00:22:07.720 { 00:22:07.720 "name": "BaseBdev2", 00:22:07.720 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:07.720 "is_configured": true, 00:22:07.720 "data_offset": 2048, 00:22:07.720 "data_size": 63488 00:22:07.720 }, 00:22:07.720 { 00:22:07.720 "name": "BaseBdev3", 00:22:07.720 "uuid": 
"cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:07.720 "is_configured": true, 00:22:07.720 "data_offset": 2048, 00:22:07.720 "data_size": 63488 00:22:07.720 } 00:22:07.720 ] 00:22:07.720 }' 00:22:07.720 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:07.720 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.979 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:07.979 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.979 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.979 [2024-10-15 09:21:51.856319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:07.979 [2024-10-15 09:21:51.872551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:22:07.979 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.979 09:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:07.979 [2024-10-15 09:21:51.880271] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:09.356 09:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:09.356 09:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:09.356 09:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:09.356 09:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:09.356 09:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:09.356 09:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:22:09.356 09:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.356 09:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.356 09:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.356 09:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.356 09:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:09.356 "name": "raid_bdev1", 00:22:09.356 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:09.356 "strip_size_kb": 64, 00:22:09.356 "state": "online", 00:22:09.356 "raid_level": "raid5f", 00:22:09.356 "superblock": true, 00:22:09.356 "num_base_bdevs": 3, 00:22:09.356 "num_base_bdevs_discovered": 3, 00:22:09.356 "num_base_bdevs_operational": 3, 00:22:09.356 "process": { 00:22:09.356 "type": "rebuild", 00:22:09.356 "target": "spare", 00:22:09.356 "progress": { 00:22:09.356 "blocks": 18432, 00:22:09.356 "percent": 14 00:22:09.356 } 00:22:09.356 }, 00:22:09.356 "base_bdevs_list": [ 00:22:09.356 { 00:22:09.356 "name": "spare", 00:22:09.356 "uuid": "eed8b265-0466-5d37-b420-b70ce733eeaf", 00:22:09.356 "is_configured": true, 00:22:09.356 "data_offset": 2048, 00:22:09.356 "data_size": 63488 00:22:09.356 }, 00:22:09.356 { 00:22:09.356 "name": "BaseBdev2", 00:22:09.356 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:09.356 "is_configured": true, 00:22:09.356 "data_offset": 2048, 00:22:09.356 "data_size": 63488 00:22:09.356 }, 00:22:09.356 { 00:22:09.356 "name": "BaseBdev3", 00:22:09.356 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:09.356 "is_configured": true, 00:22:09.356 "data_offset": 2048, 00:22:09.356 "data_size": 63488 00:22:09.356 } 00:22:09.356 ] 00:22:09.356 }' 00:22:09.356 09:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:09.356 09:21:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:09.356 09:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.356 [2024-10-15 09:21:53.037890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:09.356 [2024-10-15 09:21:53.097983] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:09.356 [2024-10-15 09:21:53.098084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:09.356 [2024-10-15 09:21:53.098133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:09.356 [2024-10-15 09:21:53.098150] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:09.356 09:21:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.356 "name": "raid_bdev1", 00:22:09.356 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:09.356 "strip_size_kb": 64, 00:22:09.356 "state": "online", 00:22:09.356 "raid_level": "raid5f", 00:22:09.356 "superblock": true, 00:22:09.356 "num_base_bdevs": 3, 00:22:09.356 "num_base_bdevs_discovered": 2, 00:22:09.356 "num_base_bdevs_operational": 2, 00:22:09.356 "base_bdevs_list": [ 00:22:09.356 { 00:22:09.356 "name": null, 00:22:09.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.356 "is_configured": false, 00:22:09.356 "data_offset": 0, 00:22:09.356 "data_size": 63488 00:22:09.356 }, 00:22:09.356 { 00:22:09.356 "name": "BaseBdev2", 00:22:09.356 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:09.356 "is_configured": true, 00:22:09.356 "data_offset": 2048, 00:22:09.356 "data_size": 
63488 00:22:09.356 }, 00:22:09.356 { 00:22:09.356 "name": "BaseBdev3", 00:22:09.356 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:09.356 "is_configured": true, 00:22:09.356 "data_offset": 2048, 00:22:09.356 "data_size": 63488 00:22:09.356 } 00:22:09.356 ] 00:22:09.356 }' 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.356 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.924 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:09.924 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:09.924 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:09.924 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:09.924 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:09.924 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.924 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.924 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.924 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.924 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.924 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:09.924 "name": "raid_bdev1", 00:22:09.924 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:09.924 "strip_size_kb": 64, 00:22:09.924 "state": "online", 00:22:09.924 "raid_level": "raid5f", 00:22:09.924 "superblock": true, 00:22:09.924 "num_base_bdevs": 3, 00:22:09.924 
"num_base_bdevs_discovered": 2, 00:22:09.924 "num_base_bdevs_operational": 2, 00:22:09.924 "base_bdevs_list": [ 00:22:09.924 { 00:22:09.924 "name": null, 00:22:09.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.924 "is_configured": false, 00:22:09.924 "data_offset": 0, 00:22:09.924 "data_size": 63488 00:22:09.924 }, 00:22:09.924 { 00:22:09.924 "name": "BaseBdev2", 00:22:09.924 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:09.924 "is_configured": true, 00:22:09.925 "data_offset": 2048, 00:22:09.925 "data_size": 63488 00:22:09.925 }, 00:22:09.925 { 00:22:09.925 "name": "BaseBdev3", 00:22:09.925 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:09.925 "is_configured": true, 00:22:09.925 "data_offset": 2048, 00:22:09.925 "data_size": 63488 00:22:09.925 } 00:22:09.925 ] 00:22:09.925 }' 00:22:09.925 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:09.925 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:09.925 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:09.925 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:09.925 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:09.925 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.925 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.925 [2024-10-15 09:21:53.795517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:09.925 [2024-10-15 09:21:53.810903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:22:09.925 09:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.925 09:21:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:09.925 [2024-10-15 09:21:53.818489] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:11.402 "name": "raid_bdev1", 00:22:11.402 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:11.402 "strip_size_kb": 64, 00:22:11.402 "state": "online", 00:22:11.402 "raid_level": "raid5f", 00:22:11.402 "superblock": true, 00:22:11.402 "num_base_bdevs": 3, 00:22:11.402 "num_base_bdevs_discovered": 3, 00:22:11.402 "num_base_bdevs_operational": 3, 00:22:11.402 "process": { 00:22:11.402 "type": "rebuild", 00:22:11.402 "target": "spare", 00:22:11.402 "progress": { 00:22:11.402 "blocks": 18432, 00:22:11.402 "percent": 14 00:22:11.402 } 
00:22:11.402 }, 00:22:11.402 "base_bdevs_list": [ 00:22:11.402 { 00:22:11.402 "name": "spare", 00:22:11.402 "uuid": "eed8b265-0466-5d37-b420-b70ce733eeaf", 00:22:11.402 "is_configured": true, 00:22:11.402 "data_offset": 2048, 00:22:11.402 "data_size": 63488 00:22:11.402 }, 00:22:11.402 { 00:22:11.402 "name": "BaseBdev2", 00:22:11.402 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:11.402 "is_configured": true, 00:22:11.402 "data_offset": 2048, 00:22:11.402 "data_size": 63488 00:22:11.402 }, 00:22:11.402 { 00:22:11.402 "name": "BaseBdev3", 00:22:11.402 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:11.402 "is_configured": true, 00:22:11.402 "data_offset": 2048, 00:22:11.402 "data_size": 63488 00:22:11.402 } 00:22:11.402 ] 00:22:11.402 }' 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:11.402 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=625 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:11.402 09:21:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.402 09:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.402 09:21:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.402 09:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:11.402 "name": "raid_bdev1", 00:22:11.402 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:11.402 "strip_size_kb": 64, 00:22:11.402 "state": "online", 00:22:11.402 "raid_level": "raid5f", 00:22:11.402 "superblock": true, 00:22:11.402 "num_base_bdevs": 3, 00:22:11.402 "num_base_bdevs_discovered": 3, 00:22:11.402 "num_base_bdevs_operational": 3, 00:22:11.402 "process": { 00:22:11.402 "type": "rebuild", 00:22:11.402 "target": "spare", 00:22:11.402 "progress": { 00:22:11.402 "blocks": 22528, 00:22:11.402 "percent": 17 00:22:11.402 } 00:22:11.402 }, 00:22:11.402 "base_bdevs_list": [ 00:22:11.402 { 00:22:11.402 "name": "spare", 00:22:11.402 "uuid": "eed8b265-0466-5d37-b420-b70ce733eeaf", 00:22:11.402 "is_configured": true, 00:22:11.402 "data_offset": 2048, 00:22:11.402 
"data_size": 63488 00:22:11.402 }, 00:22:11.402 { 00:22:11.402 "name": "BaseBdev2", 00:22:11.402 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:11.402 "is_configured": true, 00:22:11.402 "data_offset": 2048, 00:22:11.402 "data_size": 63488 00:22:11.402 }, 00:22:11.402 { 00:22:11.402 "name": "BaseBdev3", 00:22:11.402 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:11.402 "is_configured": true, 00:22:11.402 "data_offset": 2048, 00:22:11.402 "data_size": 63488 00:22:11.402 } 00:22:11.402 ] 00:22:11.402 }' 00:22:11.402 09:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:11.402 09:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:11.402 09:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:11.403 09:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:11.403 09:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:12.337 09:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:12.337 09:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:12.337 09:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:12.337 09:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:12.337 09:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:12.337 09:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:12.337 09:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.337 09:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:22:12.337 09:21:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.337 09:21:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.337 09:21:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.337 09:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:12.337 "name": "raid_bdev1", 00:22:12.337 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:12.337 "strip_size_kb": 64, 00:22:12.337 "state": "online", 00:22:12.337 "raid_level": "raid5f", 00:22:12.337 "superblock": true, 00:22:12.337 "num_base_bdevs": 3, 00:22:12.337 "num_base_bdevs_discovered": 3, 00:22:12.337 "num_base_bdevs_operational": 3, 00:22:12.337 "process": { 00:22:12.337 "type": "rebuild", 00:22:12.337 "target": "spare", 00:22:12.337 "progress": { 00:22:12.337 "blocks": 47104, 00:22:12.337 "percent": 37 00:22:12.337 } 00:22:12.337 }, 00:22:12.337 "base_bdevs_list": [ 00:22:12.337 { 00:22:12.337 "name": "spare", 00:22:12.337 "uuid": "eed8b265-0466-5d37-b420-b70ce733eeaf", 00:22:12.337 "is_configured": true, 00:22:12.337 "data_offset": 2048, 00:22:12.337 "data_size": 63488 00:22:12.337 }, 00:22:12.337 { 00:22:12.337 "name": "BaseBdev2", 00:22:12.337 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:12.337 "is_configured": true, 00:22:12.337 "data_offset": 2048, 00:22:12.337 "data_size": 63488 00:22:12.337 }, 00:22:12.337 { 00:22:12.337 "name": "BaseBdev3", 00:22:12.337 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:12.337 "is_configured": true, 00:22:12.337 "data_offset": 2048, 00:22:12.337 "data_size": 63488 00:22:12.337 } 00:22:12.337 ] 00:22:12.337 }' 00:22:12.337 09:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:12.337 09:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:12.337 09:21:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:12.594 09:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:12.594 09:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:13.528 09:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:13.528 09:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:13.528 09:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:13.528 09:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:13.528 09:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:13.528 09:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:13.528 09:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.528 09:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.528 09:21:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.528 09:21:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.528 09:21:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.528 09:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:13.528 "name": "raid_bdev1", 00:22:13.528 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:13.528 "strip_size_kb": 64, 00:22:13.528 "state": "online", 00:22:13.528 "raid_level": "raid5f", 00:22:13.528 "superblock": true, 00:22:13.528 "num_base_bdevs": 3, 00:22:13.528 "num_base_bdevs_discovered": 3, 00:22:13.528 "num_base_bdevs_operational": 
3, 00:22:13.528 "process": { 00:22:13.528 "type": "rebuild", 00:22:13.528 "target": "spare", 00:22:13.528 "progress": { 00:22:13.528 "blocks": 69632, 00:22:13.528 "percent": 54 00:22:13.528 } 00:22:13.528 }, 00:22:13.528 "base_bdevs_list": [ 00:22:13.528 { 00:22:13.528 "name": "spare", 00:22:13.528 "uuid": "eed8b265-0466-5d37-b420-b70ce733eeaf", 00:22:13.528 "is_configured": true, 00:22:13.528 "data_offset": 2048, 00:22:13.528 "data_size": 63488 00:22:13.528 }, 00:22:13.528 { 00:22:13.528 "name": "BaseBdev2", 00:22:13.528 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:13.528 "is_configured": true, 00:22:13.528 "data_offset": 2048, 00:22:13.528 "data_size": 63488 00:22:13.528 }, 00:22:13.528 { 00:22:13.528 "name": "BaseBdev3", 00:22:13.528 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:13.528 "is_configured": true, 00:22:13.528 "data_offset": 2048, 00:22:13.528 "data_size": 63488 00:22:13.528 } 00:22:13.528 ] 00:22:13.528 }' 00:22:13.528 09:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:13.528 09:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:13.528 09:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:13.528 09:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:13.528 09:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:14.903 09:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:14.903 09:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:14.903 09:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:14.903 09:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:14.903 
09:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:14.903 09:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:14.903 09:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.903 09:21:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.903 09:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.903 09:21:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.903 09:21:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.903 09:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:14.903 "name": "raid_bdev1", 00:22:14.903 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:14.903 "strip_size_kb": 64, 00:22:14.903 "state": "online", 00:22:14.903 "raid_level": "raid5f", 00:22:14.903 "superblock": true, 00:22:14.903 "num_base_bdevs": 3, 00:22:14.903 "num_base_bdevs_discovered": 3, 00:22:14.903 "num_base_bdevs_operational": 3, 00:22:14.903 "process": { 00:22:14.903 "type": "rebuild", 00:22:14.903 "target": "spare", 00:22:14.903 "progress": { 00:22:14.903 "blocks": 92160, 00:22:14.903 "percent": 72 00:22:14.903 } 00:22:14.903 }, 00:22:14.903 "base_bdevs_list": [ 00:22:14.903 { 00:22:14.903 "name": "spare", 00:22:14.903 "uuid": "eed8b265-0466-5d37-b420-b70ce733eeaf", 00:22:14.903 "is_configured": true, 00:22:14.903 "data_offset": 2048, 00:22:14.903 "data_size": 63488 00:22:14.903 }, 00:22:14.903 { 00:22:14.903 "name": "BaseBdev2", 00:22:14.903 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:14.903 "is_configured": true, 00:22:14.903 "data_offset": 2048, 00:22:14.903 "data_size": 63488 00:22:14.903 }, 00:22:14.903 { 00:22:14.903 "name": "BaseBdev3", 00:22:14.903 "uuid": 
"cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:14.903 "is_configured": true, 00:22:14.903 "data_offset": 2048, 00:22:14.903 "data_size": 63488 00:22:14.903 } 00:22:14.903 ] 00:22:14.903 }' 00:22:14.903 09:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:14.903 09:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:14.903 09:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:14.903 09:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:14.903 09:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:15.840 09:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:15.840 09:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:15.840 09:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:15.840 09:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:15.840 09:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:15.840 09:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:15.840 09:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.840 09:21:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.840 09:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.840 09:21:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.840 09:21:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.840 
09:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:15.840 "name": "raid_bdev1", 00:22:15.840 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:15.841 "strip_size_kb": 64, 00:22:15.841 "state": "online", 00:22:15.841 "raid_level": "raid5f", 00:22:15.841 "superblock": true, 00:22:15.841 "num_base_bdevs": 3, 00:22:15.841 "num_base_bdevs_discovered": 3, 00:22:15.841 "num_base_bdevs_operational": 3, 00:22:15.841 "process": { 00:22:15.841 "type": "rebuild", 00:22:15.841 "target": "spare", 00:22:15.841 "progress": { 00:22:15.841 "blocks": 116736, 00:22:15.841 "percent": 91 00:22:15.841 } 00:22:15.841 }, 00:22:15.841 "base_bdevs_list": [ 00:22:15.841 { 00:22:15.841 "name": "spare", 00:22:15.841 "uuid": "eed8b265-0466-5d37-b420-b70ce733eeaf", 00:22:15.841 "is_configured": true, 00:22:15.841 "data_offset": 2048, 00:22:15.841 "data_size": 63488 00:22:15.841 }, 00:22:15.841 { 00:22:15.841 "name": "BaseBdev2", 00:22:15.841 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:15.841 "is_configured": true, 00:22:15.841 "data_offset": 2048, 00:22:15.841 "data_size": 63488 00:22:15.841 }, 00:22:15.841 { 00:22:15.841 "name": "BaseBdev3", 00:22:15.841 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:15.841 "is_configured": true, 00:22:15.841 "data_offset": 2048, 00:22:15.841 "data_size": 63488 00:22:15.841 } 00:22:15.841 ] 00:22:15.841 }' 00:22:15.841 09:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:15.841 09:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:15.841 09:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:16.099 09:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:16.099 09:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:16.358 [2024-10-15 09:22:00.110145] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:16.358 [2024-10-15 09:22:00.110293] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:16.358 [2024-10-15 09:22:00.110501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:16.925 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:16.925 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:16.925 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:16.925 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:16.925 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:16.925 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:16.925 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.925 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.925 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.925 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:16.925 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.183 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:17.183 "name": "raid_bdev1", 00:22:17.183 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:17.183 "strip_size_kb": 64, 00:22:17.183 "state": "online", 00:22:17.183 "raid_level": "raid5f", 00:22:17.183 "superblock": true, 00:22:17.183 "num_base_bdevs": 3, 00:22:17.183 "num_base_bdevs_discovered": 3, 
00:22:17.183 "num_base_bdevs_operational": 3, 00:22:17.183 "base_bdevs_list": [ 00:22:17.183 { 00:22:17.183 "name": "spare", 00:22:17.183 "uuid": "eed8b265-0466-5d37-b420-b70ce733eeaf", 00:22:17.183 "is_configured": true, 00:22:17.183 "data_offset": 2048, 00:22:17.183 "data_size": 63488 00:22:17.183 }, 00:22:17.183 { 00:22:17.183 "name": "BaseBdev2", 00:22:17.183 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:17.183 "is_configured": true, 00:22:17.183 "data_offset": 2048, 00:22:17.183 "data_size": 63488 00:22:17.183 }, 00:22:17.183 { 00:22:17.183 "name": "BaseBdev3", 00:22:17.183 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:17.183 "is_configured": true, 00:22:17.183 "data_offset": 2048, 00:22:17.183 "data_size": 63488 00:22:17.183 } 00:22:17.183 ] 00:22:17.183 }' 00:22:17.183 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:17.183 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:17.183 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:17.183 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:17.183 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:22:17.183 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:17.183 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:17.183 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:17.183 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:17.183 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:17.183 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:22:17.183 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.183 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.183 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.183 09:22:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.183 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:17.183 "name": "raid_bdev1", 00:22:17.183 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:17.183 "strip_size_kb": 64, 00:22:17.183 "state": "online", 00:22:17.183 "raid_level": "raid5f", 00:22:17.183 "superblock": true, 00:22:17.183 "num_base_bdevs": 3, 00:22:17.183 "num_base_bdevs_discovered": 3, 00:22:17.183 "num_base_bdevs_operational": 3, 00:22:17.183 "base_bdevs_list": [ 00:22:17.183 { 00:22:17.183 "name": "spare", 00:22:17.183 "uuid": "eed8b265-0466-5d37-b420-b70ce733eeaf", 00:22:17.183 "is_configured": true, 00:22:17.183 "data_offset": 2048, 00:22:17.183 "data_size": 63488 00:22:17.183 }, 00:22:17.183 { 00:22:17.183 "name": "BaseBdev2", 00:22:17.183 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:17.183 "is_configured": true, 00:22:17.183 "data_offset": 2048, 00:22:17.183 "data_size": 63488 00:22:17.183 }, 00:22:17.183 { 00:22:17.183 "name": "BaseBdev3", 00:22:17.183 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:17.183 "is_configured": true, 00:22:17.183 "data_offset": 2048, 00:22:17.183 "data_size": 63488 00:22:17.183 } 00:22:17.183 ] 00:22:17.183 }' 00:22:17.183 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:17.183 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:17.183 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:22:17.442 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:17.442 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:17.442 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:17.442 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:17.442 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:17.442 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:17.442 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:17.442 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.442 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.442 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.442 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.442 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.442 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.442 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.442 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.442 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.442 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.442 "name": "raid_bdev1", 00:22:17.442 "uuid": 
"c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:17.442 "strip_size_kb": 64, 00:22:17.442 "state": "online", 00:22:17.442 "raid_level": "raid5f", 00:22:17.442 "superblock": true, 00:22:17.442 "num_base_bdevs": 3, 00:22:17.442 "num_base_bdevs_discovered": 3, 00:22:17.442 "num_base_bdevs_operational": 3, 00:22:17.442 "base_bdevs_list": [ 00:22:17.442 { 00:22:17.442 "name": "spare", 00:22:17.442 "uuid": "eed8b265-0466-5d37-b420-b70ce733eeaf", 00:22:17.442 "is_configured": true, 00:22:17.442 "data_offset": 2048, 00:22:17.442 "data_size": 63488 00:22:17.442 }, 00:22:17.442 { 00:22:17.442 "name": "BaseBdev2", 00:22:17.442 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:17.442 "is_configured": true, 00:22:17.442 "data_offset": 2048, 00:22:17.442 "data_size": 63488 00:22:17.442 }, 00:22:17.442 { 00:22:17.442 "name": "BaseBdev3", 00:22:17.442 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:17.442 "is_configured": true, 00:22:17.442 "data_offset": 2048, 00:22:17.442 "data_size": 63488 00:22:17.442 } 00:22:17.442 ] 00:22:17.442 }' 00:22:17.442 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.442 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.009 [2024-10-15 09:22:01.712419] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:18.009 [2024-10-15 09:22:01.712463] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:18.009 [2024-10-15 09:22:01.712592] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:18.009 [2024-10-15 09:22:01.712710] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:18.009 [2024-10-15 09:22:01.712736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:18.009 09:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:18.267 /dev/nbd0 00:22:18.267 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:18.267 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:18.267 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:18.267 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:22:18.267 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:18.267 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:18.267 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:18.267 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:22:18.267 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:18.267 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:18.267 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:18.267 1+0 records in 00:22:18.267 1+0 records out 00:22:18.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036716 s, 11.2 MB/s 00:22:18.267 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:18.267 09:22:02 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:22:18.267 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:18.267 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:18.267 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:22:18.267 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:18.267 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:18.267 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:18.581 /dev/nbd1 00:22:18.581 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:18.840 1+0 records in 00:22:18.840 1+0 records out 00:22:18.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348457 s, 11.8 MB/s 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:18.840 09:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:22:19.406 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:19.406 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:19.406 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:19.406 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:19.406 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:19.406 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:19.406 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:19.406 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:19.406 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:19.406 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.665 [2024-10-15 09:22:03.410440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:19.665 [2024-10-15 09:22:03.410518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.665 [2024-10-15 09:22:03.410551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:19.665 [2024-10-15 09:22:03.410570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.665 [2024-10-15 09:22:03.414185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.665 [2024-10-15 09:22:03.414391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:19.665 [2024-10-15 09:22:03.414661] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:19.665 [2024-10-15 09:22:03.414880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:19.665 [2024-10-15 09:22:03.415259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:19.665 spare 00:22:19.665 [2024-10-15 09:22:03.415554] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.665 [2024-10-15 09:22:03.515701] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:19.665 [2024-10-15 09:22:03.515759] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:19.665 [2024-10-15 09:22:03.516257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:22:19.665 [2024-10-15 09:22:03.521401] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:19.665 [2024-10-15 09:22:03.521427] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:19.665 [2024-10-15 09:22:03.521733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:19.665 09:22:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.665 "name": "raid_bdev1", 00:22:19.665 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:19.665 "strip_size_kb": 64, 00:22:19.665 "state": "online", 00:22:19.665 "raid_level": "raid5f", 00:22:19.665 "superblock": true, 00:22:19.665 "num_base_bdevs": 3, 00:22:19.665 "num_base_bdevs_discovered": 3, 00:22:19.665 "num_base_bdevs_operational": 3, 00:22:19.665 "base_bdevs_list": [ 00:22:19.665 { 00:22:19.665 "name": "spare", 00:22:19.665 "uuid": "eed8b265-0466-5d37-b420-b70ce733eeaf", 00:22:19.665 "is_configured": true, 00:22:19.665 "data_offset": 2048, 00:22:19.665 "data_size": 63488 00:22:19.665 }, 00:22:19.665 { 00:22:19.665 "name": "BaseBdev2", 00:22:19.665 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:19.665 "is_configured": true, 00:22:19.665 "data_offset": 2048, 00:22:19.665 
"data_size": 63488 00:22:19.665 }, 00:22:19.665 { 00:22:19.665 "name": "BaseBdev3", 00:22:19.665 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:19.665 "is_configured": true, 00:22:19.665 "data_offset": 2048, 00:22:19.665 "data_size": 63488 00:22:19.665 } 00:22:19.665 ] 00:22:19.665 }' 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.665 09:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.232 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:20.233 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:20.233 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:20.233 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:20.233 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:20.233 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.233 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.233 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.233 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.233 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.233 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:20.233 "name": "raid_bdev1", 00:22:20.233 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:20.233 "strip_size_kb": 64, 00:22:20.233 "state": "online", 00:22:20.233 "raid_level": "raid5f", 00:22:20.233 "superblock": true, 00:22:20.233 "num_base_bdevs": 3, 00:22:20.233 
"num_base_bdevs_discovered": 3, 00:22:20.233 "num_base_bdevs_operational": 3, 00:22:20.233 "base_bdevs_list": [ 00:22:20.233 { 00:22:20.233 "name": "spare", 00:22:20.233 "uuid": "eed8b265-0466-5d37-b420-b70ce733eeaf", 00:22:20.233 "is_configured": true, 00:22:20.233 "data_offset": 2048, 00:22:20.233 "data_size": 63488 00:22:20.233 }, 00:22:20.233 { 00:22:20.233 "name": "BaseBdev2", 00:22:20.233 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:20.233 "is_configured": true, 00:22:20.233 "data_offset": 2048, 00:22:20.233 "data_size": 63488 00:22:20.233 }, 00:22:20.233 { 00:22:20.233 "name": "BaseBdev3", 00:22:20.233 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:20.233 "is_configured": true, 00:22:20.233 "data_offset": 2048, 00:22:20.233 "data_size": 63488 00:22:20.233 } 00:22:20.233 ] 00:22:20.233 }' 00:22:20.233 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.492 [2024-10-15 09:22:04.280018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.492 09:22:04 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.492 "name": "raid_bdev1", 00:22:20.492 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:20.492 "strip_size_kb": 64, 00:22:20.492 "state": "online", 00:22:20.492 "raid_level": "raid5f", 00:22:20.492 "superblock": true, 00:22:20.492 "num_base_bdevs": 3, 00:22:20.492 "num_base_bdevs_discovered": 2, 00:22:20.492 "num_base_bdevs_operational": 2, 00:22:20.492 "base_bdevs_list": [ 00:22:20.492 { 00:22:20.492 "name": null, 00:22:20.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.492 "is_configured": false, 00:22:20.492 "data_offset": 0, 00:22:20.492 "data_size": 63488 00:22:20.492 }, 00:22:20.492 { 00:22:20.492 "name": "BaseBdev2", 00:22:20.492 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:20.492 "is_configured": true, 00:22:20.492 "data_offset": 2048, 00:22:20.492 "data_size": 63488 00:22:20.492 }, 00:22:20.492 { 00:22:20.492 "name": "BaseBdev3", 00:22:20.492 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:20.492 "is_configured": true, 00:22:20.492 "data_offset": 2048, 00:22:20.492 "data_size": 63488 00:22:20.492 } 00:22:20.492 ] 00:22:20.492 }' 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.492 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.058 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:21.058 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.058 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.058 [2024-10-15 09:22:04.780209] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:21.058 [2024-10-15 09:22:04.780619] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:21.058 [2024-10-15 09:22:04.780774] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:21.058 [2024-10-15 09:22:04.780835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:21.058 [2024-10-15 09:22:04.795825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:22:21.058 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.058 09:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:21.058 [2024-10-15 09:22:04.803371] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:21.993 09:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:21.993 09:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:21.993 09:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:21.993 09:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:21.993 09:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:21.993 09:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.993 09:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.993 09:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.993 09:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:22:21.993 09:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.993 09:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:21.993 "name": "raid_bdev1", 00:22:21.993 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:21.993 "strip_size_kb": 64, 00:22:21.993 "state": "online", 00:22:21.993 "raid_level": "raid5f", 00:22:21.993 "superblock": true, 00:22:21.993 "num_base_bdevs": 3, 00:22:21.993 "num_base_bdevs_discovered": 3, 00:22:21.993 "num_base_bdevs_operational": 3, 00:22:21.993 "process": { 00:22:21.993 "type": "rebuild", 00:22:21.993 "target": "spare", 00:22:21.993 "progress": { 00:22:21.993 "blocks": 18432, 00:22:21.993 "percent": 14 00:22:21.993 } 00:22:21.993 }, 00:22:21.993 "base_bdevs_list": [ 00:22:21.993 { 00:22:21.993 "name": "spare", 00:22:21.993 "uuid": "eed8b265-0466-5d37-b420-b70ce733eeaf", 00:22:21.993 "is_configured": true, 00:22:21.993 "data_offset": 2048, 00:22:21.993 "data_size": 63488 00:22:21.993 }, 00:22:21.993 { 00:22:21.993 "name": "BaseBdev2", 00:22:21.993 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:21.993 "is_configured": true, 00:22:21.993 "data_offset": 2048, 00:22:21.993 "data_size": 63488 00:22:21.993 }, 00:22:21.993 { 00:22:21.993 "name": "BaseBdev3", 00:22:21.993 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:21.993 "is_configured": true, 00:22:21.993 "data_offset": 2048, 00:22:21.993 "data_size": 63488 00:22:21.993 } 00:22:21.993 ] 00:22:21.993 }' 00:22:21.993 09:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:21.993 09:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:21.993 09:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:22.251 09:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:22:22.251 09:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:22.251 09:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.251 09:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.251 [2024-10-15 09:22:05.969028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:22.251 [2024-10-15 09:22:06.021315] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:22.251 [2024-10-15 09:22:06.021659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.251 [2024-10-15 09:22:06.021694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:22.252 [2024-10-15 09:22:06.021722] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:22.252 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.252 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:22.252 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:22.252 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:22.252 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:22.252 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:22.252 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:22.252 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:22.252 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:22.252 09:22:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:22.252 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:22.252 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.252 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.252 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.252 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.252 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.252 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:22.252 "name": "raid_bdev1", 00:22:22.252 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:22.252 "strip_size_kb": 64, 00:22:22.252 "state": "online", 00:22:22.252 "raid_level": "raid5f", 00:22:22.252 "superblock": true, 00:22:22.252 "num_base_bdevs": 3, 00:22:22.252 "num_base_bdevs_discovered": 2, 00:22:22.252 "num_base_bdevs_operational": 2, 00:22:22.252 "base_bdevs_list": [ 00:22:22.252 { 00:22:22.252 "name": null, 00:22:22.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.252 "is_configured": false, 00:22:22.252 "data_offset": 0, 00:22:22.252 "data_size": 63488 00:22:22.252 }, 00:22:22.252 { 00:22:22.252 "name": "BaseBdev2", 00:22:22.252 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:22.252 "is_configured": true, 00:22:22.252 "data_offset": 2048, 00:22:22.252 "data_size": 63488 00:22:22.252 }, 00:22:22.252 { 00:22:22.252 "name": "BaseBdev3", 00:22:22.252 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:22.252 "is_configured": true, 00:22:22.252 "data_offset": 2048, 00:22:22.252 "data_size": 63488 00:22:22.252 } 00:22:22.252 ] 00:22:22.252 }' 00:22:22.252 09:22:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:22.252 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.819 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:22.819 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.819 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.819 [2024-10-15 09:22:06.575625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:22.819 [2024-10-15 09:22:06.575760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.819 [2024-10-15 09:22:06.575817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:22:22.819 [2024-10-15 09:22:06.575853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.819 [2024-10-15 09:22:06.576842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.819 [2024-10-15 09:22:06.576925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:22.819 [2024-10-15 09:22:06.577158] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:22.819 [2024-10-15 09:22:06.577198] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:22.819 [2024-10-15 09:22:06.577220] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:22.819 [2024-10-15 09:22:06.577269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:22.819 spare 00:22:22.819 [2024-10-15 09:22:06.599783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:22:22.819 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.819 09:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:22.819 [2024-10-15 09:22:06.607804] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:23.754 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:23.754 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:23.754 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:23.754 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:23.754 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:23.754 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.754 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.754 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.754 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.754 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.754 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:23.754 "name": "raid_bdev1", 00:22:23.754 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:23.754 "strip_size_kb": 64, 00:22:23.754 "state": 
"online", 00:22:23.754 "raid_level": "raid5f", 00:22:23.754 "superblock": true, 00:22:23.754 "num_base_bdevs": 3, 00:22:23.754 "num_base_bdevs_discovered": 3, 00:22:23.754 "num_base_bdevs_operational": 3, 00:22:23.754 "process": { 00:22:23.754 "type": "rebuild", 00:22:23.754 "target": "spare", 00:22:23.754 "progress": { 00:22:23.754 "blocks": 18432, 00:22:23.754 "percent": 14 00:22:23.754 } 00:22:23.754 }, 00:22:23.755 "base_bdevs_list": [ 00:22:23.755 { 00:22:23.755 "name": "spare", 00:22:23.755 "uuid": "eed8b265-0466-5d37-b420-b70ce733eeaf", 00:22:23.755 "is_configured": true, 00:22:23.755 "data_offset": 2048, 00:22:23.755 "data_size": 63488 00:22:23.755 }, 00:22:23.755 { 00:22:23.755 "name": "BaseBdev2", 00:22:23.755 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:23.755 "is_configured": true, 00:22:23.755 "data_offset": 2048, 00:22:23.755 "data_size": 63488 00:22:23.755 }, 00:22:23.755 { 00:22:23.755 "name": "BaseBdev3", 00:22:23.755 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:23.755 "is_configured": true, 00:22:23.755 "data_offset": 2048, 00:22:23.755 "data_size": 63488 00:22:23.755 } 00:22:23.755 ] 00:22:23.755 }' 00:22:23.755 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.014 [2024-10-15 09:22:07.762013] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:24.014 [2024-10-15 09:22:07.826442] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:24.014 [2024-10-15 09:22:07.826836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:24.014 [2024-10-15 09:22:07.826877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:24.014 [2024-10-15 09:22:07.826891] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.014 "name": "raid_bdev1", 00:22:24.014 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:24.014 "strip_size_kb": 64, 00:22:24.014 "state": "online", 00:22:24.014 "raid_level": "raid5f", 00:22:24.014 "superblock": true, 00:22:24.014 "num_base_bdevs": 3, 00:22:24.014 "num_base_bdevs_discovered": 2, 00:22:24.014 "num_base_bdevs_operational": 2, 00:22:24.014 "base_bdevs_list": [ 00:22:24.014 { 00:22:24.014 "name": null, 00:22:24.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.014 "is_configured": false, 00:22:24.014 "data_offset": 0, 00:22:24.014 "data_size": 63488 00:22:24.014 }, 00:22:24.014 { 00:22:24.014 "name": "BaseBdev2", 00:22:24.014 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:24.014 "is_configured": true, 00:22:24.014 "data_offset": 2048, 00:22:24.014 "data_size": 63488 00:22:24.014 }, 00:22:24.014 { 00:22:24.014 "name": "BaseBdev3", 00:22:24.014 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:24.014 "is_configured": true, 00:22:24.014 "data_offset": 2048, 00:22:24.014 "data_size": 63488 00:22:24.014 } 00:22:24.014 ] 00:22:24.014 }' 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.014 09:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.580 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:24.580 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:22:24.580 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:24.580 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:24.580 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:24.580 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.580 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.580 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.580 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.580 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.580 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:24.580 "name": "raid_bdev1", 00:22:24.580 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:24.580 "strip_size_kb": 64, 00:22:24.580 "state": "online", 00:22:24.580 "raid_level": "raid5f", 00:22:24.580 "superblock": true, 00:22:24.581 "num_base_bdevs": 3, 00:22:24.581 "num_base_bdevs_discovered": 2, 00:22:24.581 "num_base_bdevs_operational": 2, 00:22:24.581 "base_bdevs_list": [ 00:22:24.581 { 00:22:24.581 "name": null, 00:22:24.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.581 "is_configured": false, 00:22:24.581 "data_offset": 0, 00:22:24.581 "data_size": 63488 00:22:24.581 }, 00:22:24.581 { 00:22:24.581 "name": "BaseBdev2", 00:22:24.581 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:24.581 "is_configured": true, 00:22:24.581 "data_offset": 2048, 00:22:24.581 "data_size": 63488 00:22:24.581 }, 00:22:24.581 { 00:22:24.581 "name": "BaseBdev3", 00:22:24.581 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:24.581 "is_configured": true, 
00:22:24.581 "data_offset": 2048, 00:22:24.581 "data_size": 63488 00:22:24.581 } 00:22:24.581 ] 00:22:24.581 }' 00:22:24.581 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:24.581 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:24.581 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:24.840 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:24.840 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:24.840 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.840 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.840 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.840 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:24.840 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.840 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.840 [2024-10-15 09:22:08.548883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:24.840 [2024-10-15 09:22:08.548977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.840 [2024-10-15 09:22:08.549017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:22:24.840 [2024-10-15 09:22:08.549032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.840 [2024-10-15 09:22:08.549686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.840 [2024-10-15 
09:22:08.549729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:24.840 [2024-10-15 09:22:08.549844] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:24.840 [2024-10-15 09:22:08.549867] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:24.840 [2024-10-15 09:22:08.549898] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:24.840 [2024-10-15 09:22:08.549916] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:24.840 BaseBdev1 00:22:24.840 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.840 09:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:25.854 09:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:25.854 09:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:25.854 09:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:25.854 09:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:25.854 09:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:25.854 09:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:25.854 09:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:25.854 09:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:25.854 09:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:25.854 09:22:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:25.854 09:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.854 09:22:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.854 09:22:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.854 09:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.854 09:22:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.854 09:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:25.854 "name": "raid_bdev1", 00:22:25.854 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:25.854 "strip_size_kb": 64, 00:22:25.854 "state": "online", 00:22:25.854 "raid_level": "raid5f", 00:22:25.854 "superblock": true, 00:22:25.854 "num_base_bdevs": 3, 00:22:25.854 "num_base_bdevs_discovered": 2, 00:22:25.854 "num_base_bdevs_operational": 2, 00:22:25.854 "base_bdevs_list": [ 00:22:25.854 { 00:22:25.854 "name": null, 00:22:25.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.854 "is_configured": false, 00:22:25.854 "data_offset": 0, 00:22:25.854 "data_size": 63488 00:22:25.854 }, 00:22:25.854 { 00:22:25.854 "name": "BaseBdev2", 00:22:25.854 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:25.854 "is_configured": true, 00:22:25.854 "data_offset": 2048, 00:22:25.854 "data_size": 63488 00:22:25.854 }, 00:22:25.854 { 00:22:25.854 "name": "BaseBdev3", 00:22:25.854 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:25.854 "is_configured": true, 00:22:25.854 "data_offset": 2048, 00:22:25.854 "data_size": 63488 00:22:25.854 } 00:22:25.854 ] 00:22:25.854 }' 00:22:25.854 09:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:25.854 09:22:09 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:26.423 "name": "raid_bdev1", 00:22:26.423 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:26.423 "strip_size_kb": 64, 00:22:26.423 "state": "online", 00:22:26.423 "raid_level": "raid5f", 00:22:26.423 "superblock": true, 00:22:26.423 "num_base_bdevs": 3, 00:22:26.423 "num_base_bdevs_discovered": 2, 00:22:26.423 "num_base_bdevs_operational": 2, 00:22:26.423 "base_bdevs_list": [ 00:22:26.423 { 00:22:26.423 "name": null, 00:22:26.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.423 "is_configured": false, 00:22:26.423 "data_offset": 0, 00:22:26.423 "data_size": 63488 00:22:26.423 }, 00:22:26.423 { 00:22:26.423 "name": "BaseBdev2", 00:22:26.423 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 
00:22:26.423 "is_configured": true, 00:22:26.423 "data_offset": 2048, 00:22:26.423 "data_size": 63488 00:22:26.423 }, 00:22:26.423 { 00:22:26.423 "name": "BaseBdev3", 00:22:26.423 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:26.423 "is_configured": true, 00:22:26.423 "data_offset": 2048, 00:22:26.423 "data_size": 63488 00:22:26.423 } 00:22:26.423 ] 00:22:26.423 }' 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.423 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:26.424 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.424 09:22:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.424 [2024-10-15 09:22:10.257506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:26.424 [2024-10-15 09:22:10.257903] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:26.424 [2024-10-15 09:22:10.257935] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:26.424 request: 00:22:26.424 { 00:22:26.424 "base_bdev": "BaseBdev1", 00:22:26.424 "raid_bdev": "raid_bdev1", 00:22:26.424 "method": "bdev_raid_add_base_bdev", 00:22:26.424 "req_id": 1 00:22:26.424 } 00:22:26.424 Got JSON-RPC error response 00:22:26.424 response: 00:22:26.424 { 00:22:26.424 "code": -22, 00:22:26.424 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:26.424 } 00:22:26.424 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:26.424 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:22:26.424 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:26.424 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:26.424 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:26.424 09:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:27.360 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:27.360 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:27.360 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:27.360 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:27.360 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:27.360 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:27.360 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:27.360 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:27.360 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:27.360 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:27.360 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.360 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.360 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.360 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.619 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.619 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:27.619 "name": "raid_bdev1", 00:22:27.619 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:27.619 "strip_size_kb": 64, 00:22:27.619 "state": "online", 00:22:27.619 "raid_level": "raid5f", 00:22:27.619 "superblock": true, 00:22:27.619 "num_base_bdevs": 3, 00:22:27.619 "num_base_bdevs_discovered": 2, 00:22:27.619 "num_base_bdevs_operational": 2, 00:22:27.619 "base_bdevs_list": [ 00:22:27.619 { 00:22:27.619 "name": null, 00:22:27.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.619 "is_configured": false, 00:22:27.619 "data_offset": 0, 00:22:27.619 "data_size": 63488 00:22:27.619 }, 00:22:27.619 { 00:22:27.619 
"name": "BaseBdev2", 00:22:27.619 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:27.619 "is_configured": true, 00:22:27.619 "data_offset": 2048, 00:22:27.619 "data_size": 63488 00:22:27.619 }, 00:22:27.619 { 00:22:27.619 "name": "BaseBdev3", 00:22:27.619 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:27.619 "is_configured": true, 00:22:27.619 "data_offset": 2048, 00:22:27.619 "data_size": 63488 00:22:27.619 } 00:22:27.619 ] 00:22:27.619 }' 00:22:27.619 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:27.619 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.878 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:27.878 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:27.878 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:27.878 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:27.878 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:27.878 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.878 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.878 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.878 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.138 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.138 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:28.138 "name": "raid_bdev1", 00:22:28.138 "uuid": "c60ff67f-5347-48dd-98e3-f6a5bb49962a", 00:22:28.138 
"strip_size_kb": 64, 00:22:28.138 "state": "online", 00:22:28.138 "raid_level": "raid5f", 00:22:28.138 "superblock": true, 00:22:28.138 "num_base_bdevs": 3, 00:22:28.138 "num_base_bdevs_discovered": 2, 00:22:28.138 "num_base_bdevs_operational": 2, 00:22:28.138 "base_bdevs_list": [ 00:22:28.138 { 00:22:28.138 "name": null, 00:22:28.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.138 "is_configured": false, 00:22:28.138 "data_offset": 0, 00:22:28.138 "data_size": 63488 00:22:28.138 }, 00:22:28.138 { 00:22:28.138 "name": "BaseBdev2", 00:22:28.138 "uuid": "bf15553f-420d-50f6-88bf-e8cb70f0ae7b", 00:22:28.138 "is_configured": true, 00:22:28.138 "data_offset": 2048, 00:22:28.138 "data_size": 63488 00:22:28.138 }, 00:22:28.138 { 00:22:28.138 "name": "BaseBdev3", 00:22:28.138 "uuid": "cbee0d16-c325-5fb6-955a-4a0b2f549260", 00:22:28.138 "is_configured": true, 00:22:28.138 "data_offset": 2048, 00:22:28.138 "data_size": 63488 00:22:28.138 } 00:22:28.138 ] 00:22:28.138 }' 00:22:28.138 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:28.138 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:28.138 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:28.138 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:28.138 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82698 00:22:28.138 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82698 ']' 00:22:28.138 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 82698 00:22:28.138 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:22:28.138 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:28.138 09:22:11 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82698 00:22:28.138 killing process with pid 82698 00:22:28.138 Received shutdown signal, test time was about 60.000000 seconds 00:22:28.138 00:22:28.138 Latency(us) 00:22:28.138 [2024-10-15T09:22:12.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.138 [2024-10-15T09:22:12.066Z] =================================================================================================================== 00:22:28.138 [2024-10-15T09:22:12.066Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:28.138 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:28.138 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:28.138 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82698' 00:22:28.138 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 82698 00:22:28.138 [2024-10-15 09:22:12.001047] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:28.138 09:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 82698 00:22:28.138 [2024-10-15 09:22:12.001264] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:28.138 [2024-10-15 09:22:12.001354] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:28.138 [2024-10-15 09:22:12.001375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:28.707 [2024-10-15 09:22:12.379293] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:29.653 09:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:22:29.654 ************************************ 00:22:29.654 END TEST 
raid5f_rebuild_test_sb 00:22:29.654 ************************************ 00:22:29.654 00:22:29.654 real 0m25.371s 00:22:29.654 user 0m33.740s 00:22:29.654 sys 0m2.859s 00:22:29.654 09:22:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:29.654 09:22:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.654 09:22:13 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:22:29.654 09:22:13 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:22:29.654 09:22:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:29.654 09:22:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:29.654 09:22:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:29.654 ************************************ 00:22:29.654 START TEST raid5f_state_function_test 00:22:29.654 ************************************ 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83465 00:22:29.654 Process raid pid: 83465 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83465' 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83465 00:22:29.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 83465 ']' 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:29.654 09:22:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.932 [2024-10-15 09:22:13.652885] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:22:29.932 [2024-10-15 09:22:13.653288] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.932 [2024-10-15 09:22:13.823542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.191 [2024-10-15 09:22:13.977907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.449 [2024-10-15 09:22:14.217393] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:30.449 [2024-10-15 09:22:14.217465] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:31.017 09:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:31.017 09:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:22:31.017 09:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:31.017 09:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.017 09:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.017 [2024-10-15 09:22:14.667576] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:31.017 [2024-10-15 09:22:14.667818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:31.017 [2024-10-15 09:22:14.667942] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:31.017 [2024-10-15 09:22:14.667977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:31.017 [2024-10-15 09:22:14.667990] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:22:31.017 [2024-10-15 09:22:14.668006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:31.017 [2024-10-15 09:22:14.668016] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:31.017 [2024-10-15 09:22:14.668031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:31.017 09:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.017 09:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:31.017 09:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:31.017 09:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:31.017 09:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:31.017 09:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:31.017 09:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:31.017 09:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:31.017 09:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:31.017 09:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:31.017 09:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:31.017 09:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:31.017 09:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.017 09:22:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.017 09:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.017 09:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.017 09:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:31.017 "name": "Existed_Raid", 00:22:31.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.017 "strip_size_kb": 64, 00:22:31.017 "state": "configuring", 00:22:31.017 "raid_level": "raid5f", 00:22:31.017 "superblock": false, 00:22:31.017 "num_base_bdevs": 4, 00:22:31.018 "num_base_bdevs_discovered": 0, 00:22:31.018 "num_base_bdevs_operational": 4, 00:22:31.018 "base_bdevs_list": [ 00:22:31.018 { 00:22:31.018 "name": "BaseBdev1", 00:22:31.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.018 "is_configured": false, 00:22:31.018 "data_offset": 0, 00:22:31.018 "data_size": 0 00:22:31.018 }, 00:22:31.018 { 00:22:31.018 "name": "BaseBdev2", 00:22:31.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.018 "is_configured": false, 00:22:31.018 "data_offset": 0, 00:22:31.018 "data_size": 0 00:22:31.018 }, 00:22:31.018 { 00:22:31.018 "name": "BaseBdev3", 00:22:31.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.018 "is_configured": false, 00:22:31.018 "data_offset": 0, 00:22:31.018 "data_size": 0 00:22:31.018 }, 00:22:31.018 { 00:22:31.018 "name": "BaseBdev4", 00:22:31.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.018 "is_configured": false, 00:22:31.018 "data_offset": 0, 00:22:31.018 "data_size": 0 00:22:31.018 } 00:22:31.018 ] 00:22:31.018 }' 00:22:31.018 09:22:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:31.018 09:22:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.584 [2024-10-15 09:22:15.223716] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:31.584 [2024-10-15 09:22:15.223776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.584 [2024-10-15 09:22:15.235738] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:31.584 [2024-10-15 09:22:15.235987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:31.584 [2024-10-15 09:22:15.236146] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:31.584 [2024-10-15 09:22:15.236212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:31.584 [2024-10-15 09:22:15.236333] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:31.584 [2024-10-15 09:22:15.236399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:31.584 [2024-10-15 09:22:15.236444] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:22:31.584 [2024-10-15 09:22:15.236577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.584 [2024-10-15 09:22:15.286072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:31.584 BaseBdev1 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.584 
09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.584 [ 00:22:31.584 { 00:22:31.584 "name": "BaseBdev1", 00:22:31.584 "aliases": [ 00:22:31.584 "00b9b726-deb7-43ed-a824-61253a146be5" 00:22:31.584 ], 00:22:31.584 "product_name": "Malloc disk", 00:22:31.584 "block_size": 512, 00:22:31.584 "num_blocks": 65536, 00:22:31.584 "uuid": "00b9b726-deb7-43ed-a824-61253a146be5", 00:22:31.584 "assigned_rate_limits": { 00:22:31.584 "rw_ios_per_sec": 0, 00:22:31.584 "rw_mbytes_per_sec": 0, 00:22:31.584 "r_mbytes_per_sec": 0, 00:22:31.584 "w_mbytes_per_sec": 0 00:22:31.584 }, 00:22:31.584 "claimed": true, 00:22:31.584 "claim_type": "exclusive_write", 00:22:31.584 "zoned": false, 00:22:31.584 "supported_io_types": { 00:22:31.584 "read": true, 00:22:31.584 "write": true, 00:22:31.584 "unmap": true, 00:22:31.584 "flush": true, 00:22:31.584 "reset": true, 00:22:31.584 "nvme_admin": false, 00:22:31.584 "nvme_io": false, 00:22:31.584 "nvme_io_md": false, 00:22:31.584 "write_zeroes": true, 00:22:31.584 "zcopy": true, 00:22:31.584 "get_zone_info": false, 00:22:31.584 "zone_management": false, 00:22:31.584 "zone_append": false, 00:22:31.584 "compare": false, 00:22:31.584 "compare_and_write": false, 00:22:31.584 "abort": true, 00:22:31.584 "seek_hole": false, 00:22:31.584 "seek_data": false, 00:22:31.584 "copy": true, 00:22:31.584 "nvme_iov_md": false 00:22:31.584 }, 00:22:31.584 "memory_domains": [ 00:22:31.584 { 00:22:31.584 "dma_device_id": "system", 00:22:31.584 "dma_device_type": 1 00:22:31.584 }, 00:22:31.584 { 00:22:31.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.584 "dma_device_type": 2 00:22:31.584 } 00:22:31.584 ], 00:22:31.584 "driver_specific": {} 00:22:31.584 } 
00:22:31.584 ] 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:31.584 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:31.585 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:31.585 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.585 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:31.585 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.585 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.585 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:22:31.585 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:31.585 "name": "Existed_Raid", 00:22:31.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.585 "strip_size_kb": 64, 00:22:31.585 "state": "configuring", 00:22:31.585 "raid_level": "raid5f", 00:22:31.585 "superblock": false, 00:22:31.585 "num_base_bdevs": 4, 00:22:31.585 "num_base_bdevs_discovered": 1, 00:22:31.585 "num_base_bdevs_operational": 4, 00:22:31.585 "base_bdevs_list": [ 00:22:31.585 { 00:22:31.585 "name": "BaseBdev1", 00:22:31.585 "uuid": "00b9b726-deb7-43ed-a824-61253a146be5", 00:22:31.585 "is_configured": true, 00:22:31.585 "data_offset": 0, 00:22:31.585 "data_size": 65536 00:22:31.585 }, 00:22:31.585 { 00:22:31.585 "name": "BaseBdev2", 00:22:31.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.585 "is_configured": false, 00:22:31.585 "data_offset": 0, 00:22:31.585 "data_size": 0 00:22:31.585 }, 00:22:31.585 { 00:22:31.585 "name": "BaseBdev3", 00:22:31.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.585 "is_configured": false, 00:22:31.585 "data_offset": 0, 00:22:31.585 "data_size": 0 00:22:31.585 }, 00:22:31.585 { 00:22:31.585 "name": "BaseBdev4", 00:22:31.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.585 "is_configured": false, 00:22:31.585 "data_offset": 0, 00:22:31.585 "data_size": 0 00:22:31.585 } 00:22:31.585 ] 00:22:31.585 }' 00:22:31.585 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:31.585 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.152 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:32.152 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.152 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.152 
[2024-10-15 09:22:15.834394] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:32.152 [2024-10-15 09:22:15.834609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:32.152 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.152 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:32.152 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.152 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.152 [2024-10-15 09:22:15.842412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:32.152 [2024-10-15 09:22:15.845264] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:32.152 [2024-10-15 09:22:15.845457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:32.152 [2024-10-15 09:22:15.845484] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:32.152 [2024-10-15 09:22:15.845504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:32.152 [2024-10-15 09:22:15.845514] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:32.152 [2024-10-15 09:22:15.845533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:32.152 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.152 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:32.153 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:22:32.153 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:32.153 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:32.153 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:32.153 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:32.153 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:32.153 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:32.153 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:32.153 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:32.153 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:32.153 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:32.153 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.153 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.153 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.153 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:32.153 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.153 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:32.153 "name": "Existed_Raid", 00:22:32.153 "uuid": "00000000-0000-0000-0000-000000000000", 
00:22:32.153 "strip_size_kb": 64, 00:22:32.153 "state": "configuring", 00:22:32.153 "raid_level": "raid5f", 00:22:32.153 "superblock": false, 00:22:32.153 "num_base_bdevs": 4, 00:22:32.153 "num_base_bdevs_discovered": 1, 00:22:32.153 "num_base_bdevs_operational": 4, 00:22:32.153 "base_bdevs_list": [ 00:22:32.153 { 00:22:32.153 "name": "BaseBdev1", 00:22:32.153 "uuid": "00b9b726-deb7-43ed-a824-61253a146be5", 00:22:32.153 "is_configured": true, 00:22:32.153 "data_offset": 0, 00:22:32.153 "data_size": 65536 00:22:32.153 }, 00:22:32.153 { 00:22:32.153 "name": "BaseBdev2", 00:22:32.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.153 "is_configured": false, 00:22:32.153 "data_offset": 0, 00:22:32.153 "data_size": 0 00:22:32.153 }, 00:22:32.153 { 00:22:32.153 "name": "BaseBdev3", 00:22:32.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.153 "is_configured": false, 00:22:32.153 "data_offset": 0, 00:22:32.153 "data_size": 0 00:22:32.153 }, 00:22:32.153 { 00:22:32.153 "name": "BaseBdev4", 00:22:32.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.153 "is_configured": false, 00:22:32.153 "data_offset": 0, 00:22:32.153 "data_size": 0 00:22:32.153 } 00:22:32.153 ] 00:22:32.153 }' 00:22:32.153 09:22:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:32.153 09:22:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.721 [2024-10-15 09:22:16.412845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:32.721 BaseBdev2 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.721 [ 00:22:32.721 { 00:22:32.721 "name": "BaseBdev2", 00:22:32.721 "aliases": [ 00:22:32.721 "8573bfc1-f677-40a3-ad59-de8723a8387f" 00:22:32.721 ], 00:22:32.721 "product_name": "Malloc disk", 00:22:32.721 "block_size": 512, 00:22:32.721 "num_blocks": 65536, 00:22:32.721 "uuid": "8573bfc1-f677-40a3-ad59-de8723a8387f", 00:22:32.721 "assigned_rate_limits": { 00:22:32.721 "rw_ios_per_sec": 0, 00:22:32.721 "rw_mbytes_per_sec": 0, 00:22:32.721 
"r_mbytes_per_sec": 0, 00:22:32.721 "w_mbytes_per_sec": 0 00:22:32.721 }, 00:22:32.721 "claimed": true, 00:22:32.721 "claim_type": "exclusive_write", 00:22:32.721 "zoned": false, 00:22:32.721 "supported_io_types": { 00:22:32.721 "read": true, 00:22:32.721 "write": true, 00:22:32.721 "unmap": true, 00:22:32.721 "flush": true, 00:22:32.721 "reset": true, 00:22:32.721 "nvme_admin": false, 00:22:32.721 "nvme_io": false, 00:22:32.721 "nvme_io_md": false, 00:22:32.721 "write_zeroes": true, 00:22:32.721 "zcopy": true, 00:22:32.721 "get_zone_info": false, 00:22:32.721 "zone_management": false, 00:22:32.721 "zone_append": false, 00:22:32.721 "compare": false, 00:22:32.721 "compare_and_write": false, 00:22:32.721 "abort": true, 00:22:32.721 "seek_hole": false, 00:22:32.721 "seek_data": false, 00:22:32.721 "copy": true, 00:22:32.721 "nvme_iov_md": false 00:22:32.721 }, 00:22:32.721 "memory_domains": [ 00:22:32.721 { 00:22:32.721 "dma_device_id": "system", 00:22:32.721 "dma_device_type": 1 00:22:32.721 }, 00:22:32.721 { 00:22:32.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:32.721 "dma_device_type": 2 00:22:32.721 } 00:22:32.721 ], 00:22:32.721 "driver_specific": {} 00:22:32.721 } 00:22:32.721 ] 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.721 09:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:32.721 "name": "Existed_Raid", 00:22:32.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.721 "strip_size_kb": 64, 00:22:32.722 "state": "configuring", 00:22:32.722 "raid_level": "raid5f", 00:22:32.722 "superblock": false, 00:22:32.722 "num_base_bdevs": 4, 00:22:32.722 "num_base_bdevs_discovered": 2, 00:22:32.722 "num_base_bdevs_operational": 4, 00:22:32.722 "base_bdevs_list": [ 00:22:32.722 { 00:22:32.722 "name": "BaseBdev1", 00:22:32.722 "uuid": 
"00b9b726-deb7-43ed-a824-61253a146be5", 00:22:32.722 "is_configured": true, 00:22:32.722 "data_offset": 0, 00:22:32.722 "data_size": 65536 00:22:32.722 }, 00:22:32.722 { 00:22:32.722 "name": "BaseBdev2", 00:22:32.722 "uuid": "8573bfc1-f677-40a3-ad59-de8723a8387f", 00:22:32.722 "is_configured": true, 00:22:32.722 "data_offset": 0, 00:22:32.722 "data_size": 65536 00:22:32.722 }, 00:22:32.722 { 00:22:32.722 "name": "BaseBdev3", 00:22:32.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.722 "is_configured": false, 00:22:32.722 "data_offset": 0, 00:22:32.722 "data_size": 0 00:22:32.722 }, 00:22:32.722 { 00:22:32.722 "name": "BaseBdev4", 00:22:32.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.722 "is_configured": false, 00:22:32.722 "data_offset": 0, 00:22:32.722 "data_size": 0 00:22:32.722 } 00:22:32.722 ] 00:22:32.722 }' 00:22:32.722 09:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:32.722 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.311 09:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:33.311 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.311 09:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.311 [2024-10-15 09:22:17.030589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:33.311 BaseBdev3 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.311 [ 00:22:33.311 { 00:22:33.311 "name": "BaseBdev3", 00:22:33.311 "aliases": [ 00:22:33.311 "a6e2245e-1443-444e-88bb-82c59882466d" 00:22:33.311 ], 00:22:33.311 "product_name": "Malloc disk", 00:22:33.311 "block_size": 512, 00:22:33.311 "num_blocks": 65536, 00:22:33.311 "uuid": "a6e2245e-1443-444e-88bb-82c59882466d", 00:22:33.311 "assigned_rate_limits": { 00:22:33.311 "rw_ios_per_sec": 0, 00:22:33.311 "rw_mbytes_per_sec": 0, 00:22:33.311 "r_mbytes_per_sec": 0, 00:22:33.311 "w_mbytes_per_sec": 0 00:22:33.311 }, 00:22:33.311 "claimed": true, 00:22:33.311 "claim_type": "exclusive_write", 00:22:33.311 "zoned": false, 00:22:33.311 "supported_io_types": { 00:22:33.311 "read": true, 00:22:33.311 "write": true, 00:22:33.311 "unmap": true, 00:22:33.311 "flush": true, 00:22:33.311 "reset": true, 00:22:33.311 "nvme_admin": false, 
00:22:33.311 "nvme_io": false, 00:22:33.311 "nvme_io_md": false, 00:22:33.311 "write_zeroes": true, 00:22:33.311 "zcopy": true, 00:22:33.311 "get_zone_info": false, 00:22:33.311 "zone_management": false, 00:22:33.311 "zone_append": false, 00:22:33.311 "compare": false, 00:22:33.311 "compare_and_write": false, 00:22:33.311 "abort": true, 00:22:33.311 "seek_hole": false, 00:22:33.311 "seek_data": false, 00:22:33.311 "copy": true, 00:22:33.311 "nvme_iov_md": false 00:22:33.311 }, 00:22:33.311 "memory_domains": [ 00:22:33.311 { 00:22:33.311 "dma_device_id": "system", 00:22:33.311 "dma_device_type": 1 00:22:33.311 }, 00:22:33.311 { 00:22:33.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.311 "dma_device_type": 2 00:22:33.311 } 00:22:33.311 ], 00:22:33.311 "driver_specific": {} 00:22:33.311 } 00:22:33.311 ] 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.311 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:33.311 "name": "Existed_Raid", 00:22:33.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.311 "strip_size_kb": 64, 00:22:33.311 "state": "configuring", 00:22:33.311 "raid_level": "raid5f", 00:22:33.311 "superblock": false, 00:22:33.311 "num_base_bdevs": 4, 00:22:33.311 "num_base_bdevs_discovered": 3, 00:22:33.311 "num_base_bdevs_operational": 4, 00:22:33.311 "base_bdevs_list": [ 00:22:33.311 { 00:22:33.311 "name": "BaseBdev1", 00:22:33.311 "uuid": "00b9b726-deb7-43ed-a824-61253a146be5", 00:22:33.311 "is_configured": true, 00:22:33.311 "data_offset": 0, 00:22:33.311 "data_size": 65536 00:22:33.311 }, 00:22:33.311 { 00:22:33.311 "name": "BaseBdev2", 00:22:33.311 "uuid": "8573bfc1-f677-40a3-ad59-de8723a8387f", 00:22:33.311 "is_configured": true, 00:22:33.311 "data_offset": 0, 00:22:33.311 "data_size": 65536 00:22:33.311 }, 00:22:33.311 { 
00:22:33.311 "name": "BaseBdev3", 00:22:33.311 "uuid": "a6e2245e-1443-444e-88bb-82c59882466d", 00:22:33.311 "is_configured": true, 00:22:33.311 "data_offset": 0, 00:22:33.311 "data_size": 65536 00:22:33.311 }, 00:22:33.311 { 00:22:33.311 "name": "BaseBdev4", 00:22:33.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.311 "is_configured": false, 00:22:33.311 "data_offset": 0, 00:22:33.311 "data_size": 0 00:22:33.311 } 00:22:33.311 ] 00:22:33.312 }' 00:22:33.312 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:33.312 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.879 [2024-10-15 09:22:17.621491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:33.879 [2024-10-15 09:22:17.621781] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:33.879 [2024-10-15 09:22:17.621806] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:22:33.879 [2024-10-15 09:22:17.622181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:33.879 [2024-10-15 09:22:17.629836] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:33.879 [2024-10-15 09:22:17.629980] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:33.879 [2024-10-15 09:22:17.630532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:33.879 BaseBdev4 00:22:33.879 09:22:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.879 [ 00:22:33.879 { 00:22:33.879 "name": "BaseBdev4", 00:22:33.879 "aliases": [ 00:22:33.879 "8c3f2b00-4021-4fc2-834b-945e69d5d287" 00:22:33.879 ], 00:22:33.879 "product_name": "Malloc disk", 00:22:33.879 "block_size": 512, 00:22:33.879 "num_blocks": 65536, 00:22:33.879 "uuid": "8c3f2b00-4021-4fc2-834b-945e69d5d287", 00:22:33.879 "assigned_rate_limits": { 00:22:33.879 "rw_ios_per_sec": 0, 00:22:33.879 
"rw_mbytes_per_sec": 0, 00:22:33.879 "r_mbytes_per_sec": 0, 00:22:33.879 "w_mbytes_per_sec": 0 00:22:33.879 }, 00:22:33.879 "claimed": true, 00:22:33.879 "claim_type": "exclusive_write", 00:22:33.879 "zoned": false, 00:22:33.879 "supported_io_types": { 00:22:33.879 "read": true, 00:22:33.879 "write": true, 00:22:33.879 "unmap": true, 00:22:33.879 "flush": true, 00:22:33.879 "reset": true, 00:22:33.879 "nvme_admin": false, 00:22:33.879 "nvme_io": false, 00:22:33.879 "nvme_io_md": false, 00:22:33.879 "write_zeroes": true, 00:22:33.879 "zcopy": true, 00:22:33.879 "get_zone_info": false, 00:22:33.879 "zone_management": false, 00:22:33.879 "zone_append": false, 00:22:33.879 "compare": false, 00:22:33.879 "compare_and_write": false, 00:22:33.879 "abort": true, 00:22:33.879 "seek_hole": false, 00:22:33.879 "seek_data": false, 00:22:33.879 "copy": true, 00:22:33.879 "nvme_iov_md": false 00:22:33.879 }, 00:22:33.879 "memory_domains": [ 00:22:33.879 { 00:22:33.879 "dma_device_id": "system", 00:22:33.879 "dma_device_type": 1 00:22:33.879 }, 00:22:33.879 { 00:22:33.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.879 "dma_device_type": 2 00:22:33.879 } 00:22:33.879 ], 00:22:33.879 "driver_specific": {} 00:22:33.879 } 00:22:33.879 ] 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:33.879 09:22:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:33.879 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:33.880 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:33.880 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:33.880 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:33.880 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.880 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.880 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.880 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.880 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.880 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:33.880 "name": "Existed_Raid", 00:22:33.880 "uuid": "71232a72-6639-4963-9abf-961755511b29", 00:22:33.880 "strip_size_kb": 64, 00:22:33.880 "state": "online", 00:22:33.880 "raid_level": "raid5f", 00:22:33.880 "superblock": false, 00:22:33.880 "num_base_bdevs": 4, 00:22:33.880 "num_base_bdevs_discovered": 4, 00:22:33.880 "num_base_bdevs_operational": 4, 00:22:33.880 "base_bdevs_list": [ 00:22:33.880 { 00:22:33.880 "name": 
"BaseBdev1", 00:22:33.880 "uuid": "00b9b726-deb7-43ed-a824-61253a146be5", 00:22:33.880 "is_configured": true, 00:22:33.880 "data_offset": 0, 00:22:33.880 "data_size": 65536 00:22:33.880 }, 00:22:33.880 { 00:22:33.880 "name": "BaseBdev2", 00:22:33.880 "uuid": "8573bfc1-f677-40a3-ad59-de8723a8387f", 00:22:33.880 "is_configured": true, 00:22:33.880 "data_offset": 0, 00:22:33.880 "data_size": 65536 00:22:33.880 }, 00:22:33.880 { 00:22:33.880 "name": "BaseBdev3", 00:22:33.880 "uuid": "a6e2245e-1443-444e-88bb-82c59882466d", 00:22:33.880 "is_configured": true, 00:22:33.880 "data_offset": 0, 00:22:33.880 "data_size": 65536 00:22:33.880 }, 00:22:33.880 { 00:22:33.880 "name": "BaseBdev4", 00:22:33.880 "uuid": "8c3f2b00-4021-4fc2-834b-945e69d5d287", 00:22:33.880 "is_configured": true, 00:22:33.880 "data_offset": 0, 00:22:33.880 "data_size": 65536 00:22:33.880 } 00:22:33.880 ] 00:22:33.880 }' 00:22:33.880 09:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:33.880 09:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.447 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:34.447 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:34.447 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:34.447 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:34.447 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:34.447 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:34.447 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:34.447 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:22:34.447 09:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.447 09:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.447 [2024-10-15 09:22:18.203147] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:34.447 09:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.447 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:34.447 "name": "Existed_Raid", 00:22:34.447 "aliases": [ 00:22:34.447 "71232a72-6639-4963-9abf-961755511b29" 00:22:34.447 ], 00:22:34.447 "product_name": "Raid Volume", 00:22:34.447 "block_size": 512, 00:22:34.447 "num_blocks": 196608, 00:22:34.447 "uuid": "71232a72-6639-4963-9abf-961755511b29", 00:22:34.447 "assigned_rate_limits": { 00:22:34.447 "rw_ios_per_sec": 0, 00:22:34.447 "rw_mbytes_per_sec": 0, 00:22:34.447 "r_mbytes_per_sec": 0, 00:22:34.448 "w_mbytes_per_sec": 0 00:22:34.448 }, 00:22:34.448 "claimed": false, 00:22:34.448 "zoned": false, 00:22:34.448 "supported_io_types": { 00:22:34.448 "read": true, 00:22:34.448 "write": true, 00:22:34.448 "unmap": false, 00:22:34.448 "flush": false, 00:22:34.448 "reset": true, 00:22:34.448 "nvme_admin": false, 00:22:34.448 "nvme_io": false, 00:22:34.448 "nvme_io_md": false, 00:22:34.448 "write_zeroes": true, 00:22:34.448 "zcopy": false, 00:22:34.448 "get_zone_info": false, 00:22:34.448 "zone_management": false, 00:22:34.448 "zone_append": false, 00:22:34.448 "compare": false, 00:22:34.448 "compare_and_write": false, 00:22:34.448 "abort": false, 00:22:34.448 "seek_hole": false, 00:22:34.448 "seek_data": false, 00:22:34.448 "copy": false, 00:22:34.448 "nvme_iov_md": false 00:22:34.448 }, 00:22:34.448 "driver_specific": { 00:22:34.448 "raid": { 00:22:34.448 "uuid": "71232a72-6639-4963-9abf-961755511b29", 00:22:34.448 "strip_size_kb": 64, 
00:22:34.448 "state": "online", 00:22:34.448 "raid_level": "raid5f", 00:22:34.448 "superblock": false, 00:22:34.448 "num_base_bdevs": 4, 00:22:34.448 "num_base_bdevs_discovered": 4, 00:22:34.448 "num_base_bdevs_operational": 4, 00:22:34.448 "base_bdevs_list": [ 00:22:34.448 { 00:22:34.448 "name": "BaseBdev1", 00:22:34.448 "uuid": "00b9b726-deb7-43ed-a824-61253a146be5", 00:22:34.448 "is_configured": true, 00:22:34.448 "data_offset": 0, 00:22:34.448 "data_size": 65536 00:22:34.448 }, 00:22:34.448 { 00:22:34.448 "name": "BaseBdev2", 00:22:34.448 "uuid": "8573bfc1-f677-40a3-ad59-de8723a8387f", 00:22:34.448 "is_configured": true, 00:22:34.448 "data_offset": 0, 00:22:34.448 "data_size": 65536 00:22:34.448 }, 00:22:34.448 { 00:22:34.448 "name": "BaseBdev3", 00:22:34.448 "uuid": "a6e2245e-1443-444e-88bb-82c59882466d", 00:22:34.448 "is_configured": true, 00:22:34.448 "data_offset": 0, 00:22:34.448 "data_size": 65536 00:22:34.448 }, 00:22:34.448 { 00:22:34.448 "name": "BaseBdev4", 00:22:34.448 "uuid": "8c3f2b00-4021-4fc2-834b-945e69d5d287", 00:22:34.448 "is_configured": true, 00:22:34.448 "data_offset": 0, 00:22:34.448 "data_size": 65536 00:22:34.448 } 00:22:34.448 ] 00:22:34.448 } 00:22:34.448 } 00:22:34.448 }' 00:22:34.448 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:34.448 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:34.448 BaseBdev2 00:22:34.448 BaseBdev3 00:22:34.448 BaseBdev4' 00:22:34.448 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:34.448 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:34.448 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:34.448 09:22:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:34.448 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:34.448 09:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.448 09:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.448 09:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.708 09:22:18 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:22:34.708 [2024-10-15 09:22:18.575108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:34.967 09:22:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:34.967 "name": "Existed_Raid", 00:22:34.967 "uuid": "71232a72-6639-4963-9abf-961755511b29", 00:22:34.967 "strip_size_kb": 64, 00:22:34.967 "state": "online", 00:22:34.967 "raid_level": "raid5f", 00:22:34.967 "superblock": false, 00:22:34.967 "num_base_bdevs": 4, 00:22:34.967 "num_base_bdevs_discovered": 3, 00:22:34.967 "num_base_bdevs_operational": 3, 00:22:34.967 "base_bdevs_list": [ 00:22:34.967 { 00:22:34.967 "name": null, 00:22:34.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.967 "is_configured": false, 00:22:34.967 "data_offset": 0, 00:22:34.967 "data_size": 65536 00:22:34.967 }, 00:22:34.967 { 00:22:34.967 "name": "BaseBdev2", 00:22:34.967 "uuid": "8573bfc1-f677-40a3-ad59-de8723a8387f", 00:22:34.967 "is_configured": true, 00:22:34.967 "data_offset": 0, 00:22:34.967 "data_size": 65536 00:22:34.967 }, 00:22:34.967 { 00:22:34.967 "name": "BaseBdev3", 00:22:34.967 "uuid": "a6e2245e-1443-444e-88bb-82c59882466d", 00:22:34.967 "is_configured": true, 00:22:34.967 "data_offset": 0, 00:22:34.967 "data_size": 65536 00:22:34.967 }, 00:22:34.967 { 00:22:34.967 "name": "BaseBdev4", 00:22:34.967 "uuid": "8c3f2b00-4021-4fc2-834b-945e69d5d287", 00:22:34.967 "is_configured": true, 00:22:34.967 "data_offset": 0, 00:22:34.967 "data_size": 65536 00:22:34.967 } 00:22:34.967 ] 00:22:34.967 }' 00:22:34.967 
09:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:34.967 09:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.535 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:35.535 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:35.535 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.535 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.535 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.535 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:35.535 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.535 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:35.535 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:35.535 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:35.535 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.535 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.535 [2024-10-15 09:22:19.283636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:35.535 [2024-10-15 09:22:19.283945] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:35.535 [2024-10-15 09:22:19.380207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:35.535 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:22:35.535 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:35.535 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:35.535 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.535 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.535 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.535 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:35.535 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.535 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:35.536 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:35.536 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:35.536 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.536 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.536 [2024-10-15 09:22:19.444276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:35.796 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.796 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:35.796 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:35.796 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.796 09:22:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.796 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.796 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:35.796 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.796 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:35.796 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:35.796 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:22:35.796 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.796 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.796 [2024-10-15 09:22:19.602262] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:35.796 [2024-10-15 09:22:19.602488] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:35.796 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.796 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:35.796 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:35.796 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.796 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:35.796 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.796 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:22:35.796 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.056 BaseBdev2 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.056 [ 00:22:36.056 { 00:22:36.056 "name": "BaseBdev2", 00:22:36.056 "aliases": [ 00:22:36.056 "12e91b76-3034-40f6-ac85-562e4ffb7916" 00:22:36.056 ], 00:22:36.056 "product_name": "Malloc disk", 00:22:36.056 "block_size": 512, 00:22:36.056 "num_blocks": 65536, 00:22:36.056 "uuid": "12e91b76-3034-40f6-ac85-562e4ffb7916", 00:22:36.056 "assigned_rate_limits": { 00:22:36.056 "rw_ios_per_sec": 0, 00:22:36.056 "rw_mbytes_per_sec": 0, 00:22:36.056 "r_mbytes_per_sec": 0, 00:22:36.056 "w_mbytes_per_sec": 0 00:22:36.056 }, 00:22:36.056 "claimed": false, 00:22:36.056 "zoned": false, 00:22:36.056 "supported_io_types": { 00:22:36.056 "read": true, 00:22:36.056 "write": true, 00:22:36.056 "unmap": true, 00:22:36.056 "flush": true, 00:22:36.056 "reset": true, 00:22:36.056 "nvme_admin": false, 00:22:36.056 "nvme_io": false, 00:22:36.056 "nvme_io_md": false, 00:22:36.056 "write_zeroes": true, 00:22:36.056 "zcopy": true, 00:22:36.056 "get_zone_info": false, 00:22:36.056 "zone_management": false, 00:22:36.056 "zone_append": false, 00:22:36.056 "compare": false, 00:22:36.056 "compare_and_write": false, 00:22:36.056 "abort": true, 00:22:36.056 "seek_hole": false, 00:22:36.056 "seek_data": false, 00:22:36.056 "copy": true, 00:22:36.056 "nvme_iov_md": false 00:22:36.056 }, 00:22:36.056 "memory_domains": [ 00:22:36.056 { 00:22:36.056 "dma_device_id": "system", 00:22:36.056 
"dma_device_type": 1 00:22:36.056 }, 00:22:36.056 { 00:22:36.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.056 "dma_device_type": 2 00:22:36.056 } 00:22:36.056 ], 00:22:36.056 "driver_specific": {} 00:22:36.056 } 00:22:36.056 ] 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.056 BaseBdev3 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:22:36.056 09:22:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.056 [ 00:22:36.056 { 00:22:36.056 "name": "BaseBdev3", 00:22:36.056 "aliases": [ 00:22:36.056 "a30a6bff-64dc-4af8-955f-d35076914de9" 00:22:36.056 ], 00:22:36.056 "product_name": "Malloc disk", 00:22:36.056 "block_size": 512, 00:22:36.056 "num_blocks": 65536, 00:22:36.056 "uuid": "a30a6bff-64dc-4af8-955f-d35076914de9", 00:22:36.056 "assigned_rate_limits": { 00:22:36.056 "rw_ios_per_sec": 0, 00:22:36.056 "rw_mbytes_per_sec": 0, 00:22:36.056 "r_mbytes_per_sec": 0, 00:22:36.056 "w_mbytes_per_sec": 0 00:22:36.056 }, 00:22:36.056 "claimed": false, 00:22:36.056 "zoned": false, 00:22:36.056 "supported_io_types": { 00:22:36.056 "read": true, 00:22:36.056 "write": true, 00:22:36.056 "unmap": true, 00:22:36.056 "flush": true, 00:22:36.056 "reset": true, 00:22:36.056 "nvme_admin": false, 00:22:36.056 "nvme_io": false, 00:22:36.056 "nvme_io_md": false, 00:22:36.056 "write_zeroes": true, 00:22:36.056 "zcopy": true, 00:22:36.056 "get_zone_info": false, 00:22:36.056 "zone_management": false, 00:22:36.056 "zone_append": false, 00:22:36.056 "compare": false, 00:22:36.056 "compare_and_write": false, 00:22:36.056 "abort": true, 00:22:36.056 "seek_hole": false, 00:22:36.056 "seek_data": false, 00:22:36.056 "copy": true, 00:22:36.056 "nvme_iov_md": false 00:22:36.056 }, 00:22:36.056 "memory_domains": [ 00:22:36.056 { 00:22:36.056 
"dma_device_id": "system", 00:22:36.056 "dma_device_type": 1 00:22:36.056 }, 00:22:36.056 { 00:22:36.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.056 "dma_device_type": 2 00:22:36.056 } 00:22:36.056 ], 00:22:36.056 "driver_specific": {} 00:22:36.056 } 00:22:36.056 ] 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:36.056 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:36.057 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:36.057 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:36.057 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.057 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.057 BaseBdev4 00:22:36.057 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.057 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:22:36.057 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:22:36.057 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:36.057 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:36.057 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:36.057 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:36.057 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:22:36.057 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.057 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.057 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.057 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:36.057 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.057 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.057 [ 00:22:36.057 { 00:22:36.057 "name": "BaseBdev4", 00:22:36.057 "aliases": [ 00:22:36.057 "af9ab167-4fce-4526-9a17-bfd443c3716c" 00:22:36.057 ], 00:22:36.057 "product_name": "Malloc disk", 00:22:36.057 "block_size": 512, 00:22:36.057 "num_blocks": 65536, 00:22:36.057 "uuid": "af9ab167-4fce-4526-9a17-bfd443c3716c", 00:22:36.057 "assigned_rate_limits": { 00:22:36.057 "rw_ios_per_sec": 0, 00:22:36.057 "rw_mbytes_per_sec": 0, 00:22:36.057 "r_mbytes_per_sec": 0, 00:22:36.057 "w_mbytes_per_sec": 0 00:22:36.057 }, 00:22:36.057 "claimed": false, 00:22:36.057 "zoned": false, 00:22:36.057 "supported_io_types": { 00:22:36.057 "read": true, 00:22:36.057 "write": true, 00:22:36.057 "unmap": true, 00:22:36.057 "flush": true, 00:22:36.057 "reset": true, 00:22:36.057 "nvme_admin": false, 00:22:36.057 "nvme_io": false, 00:22:36.057 "nvme_io_md": false, 00:22:36.057 "write_zeroes": true, 00:22:36.057 "zcopy": true, 00:22:36.057 "get_zone_info": false, 00:22:36.057 "zone_management": false, 00:22:36.057 "zone_append": false, 00:22:36.057 "compare": false, 00:22:36.057 "compare_and_write": false, 00:22:36.057 "abort": true, 00:22:36.057 "seek_hole": false, 00:22:36.057 "seek_data": false, 00:22:36.057 "copy": true, 00:22:36.057 "nvme_iov_md": false 00:22:36.057 }, 00:22:36.057 "memory_domains": [ 
00:22:36.057 { 00:22:36.057 "dma_device_id": "system", 00:22:36.057 "dma_device_type": 1 00:22:36.057 }, 00:22:36.057 { 00:22:36.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.057 "dma_device_type": 2 00:22:36.057 } 00:22:36.057 ], 00:22:36.057 "driver_specific": {} 00:22:36.316 } 00:22:36.316 ] 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.316 [2024-10-15 09:22:19.989653] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:36.316 [2024-10-15 09:22:19.989751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:36.316 [2024-10-15 09:22:19.989816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:36.316 [2024-10-15 09:22:19.992475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:36.316 [2024-10-15 09:22:19.992543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.316 09:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:36.316 09:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.316 09:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:36.316 "name": "Existed_Raid", 00:22:36.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.316 "strip_size_kb": 64, 00:22:36.316 "state": "configuring", 00:22:36.316 "raid_level": "raid5f", 00:22:36.316 
"superblock": false, 00:22:36.316 "num_base_bdevs": 4, 00:22:36.316 "num_base_bdevs_discovered": 3, 00:22:36.316 "num_base_bdevs_operational": 4, 00:22:36.316 "base_bdevs_list": [ 00:22:36.316 { 00:22:36.316 "name": "BaseBdev1", 00:22:36.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.316 "is_configured": false, 00:22:36.316 "data_offset": 0, 00:22:36.316 "data_size": 0 00:22:36.316 }, 00:22:36.316 { 00:22:36.316 "name": "BaseBdev2", 00:22:36.316 "uuid": "12e91b76-3034-40f6-ac85-562e4ffb7916", 00:22:36.316 "is_configured": true, 00:22:36.316 "data_offset": 0, 00:22:36.316 "data_size": 65536 00:22:36.316 }, 00:22:36.316 { 00:22:36.316 "name": "BaseBdev3", 00:22:36.316 "uuid": "a30a6bff-64dc-4af8-955f-d35076914de9", 00:22:36.316 "is_configured": true, 00:22:36.316 "data_offset": 0, 00:22:36.316 "data_size": 65536 00:22:36.316 }, 00:22:36.316 { 00:22:36.316 "name": "BaseBdev4", 00:22:36.316 "uuid": "af9ab167-4fce-4526-9a17-bfd443c3716c", 00:22:36.316 "is_configured": true, 00:22:36.316 "data_offset": 0, 00:22:36.316 "data_size": 65536 00:22:36.316 } 00:22:36.316 ] 00:22:36.316 }' 00:22:36.316 09:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:36.316 09:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.884 [2024-10-15 09:22:20.509747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:36.884 "name": "Existed_Raid", 00:22:36.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.884 "strip_size_kb": 64, 00:22:36.884 "state": "configuring", 00:22:36.884 "raid_level": "raid5f", 00:22:36.884 "superblock": false, 
00:22:36.884 "num_base_bdevs": 4, 00:22:36.884 "num_base_bdevs_discovered": 2, 00:22:36.884 "num_base_bdevs_operational": 4, 00:22:36.884 "base_bdevs_list": [ 00:22:36.884 { 00:22:36.884 "name": "BaseBdev1", 00:22:36.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.884 "is_configured": false, 00:22:36.884 "data_offset": 0, 00:22:36.884 "data_size": 0 00:22:36.884 }, 00:22:36.884 { 00:22:36.884 "name": null, 00:22:36.884 "uuid": "12e91b76-3034-40f6-ac85-562e4ffb7916", 00:22:36.884 "is_configured": false, 00:22:36.884 "data_offset": 0, 00:22:36.884 "data_size": 65536 00:22:36.884 }, 00:22:36.884 { 00:22:36.884 "name": "BaseBdev3", 00:22:36.884 "uuid": "a30a6bff-64dc-4af8-955f-d35076914de9", 00:22:36.884 "is_configured": true, 00:22:36.884 "data_offset": 0, 00:22:36.884 "data_size": 65536 00:22:36.884 }, 00:22:36.884 { 00:22:36.884 "name": "BaseBdev4", 00:22:36.884 "uuid": "af9ab167-4fce-4526-9a17-bfd443c3716c", 00:22:36.884 "is_configured": true, 00:22:36.884 "data_offset": 0, 00:22:36.884 "data_size": 65536 00:22:36.884 } 00:22:36.884 ] 00:22:36.884 }' 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:36.884 09:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.143 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.143 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:37.143 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.143 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.143 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.402 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:37.402 
09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:37.402 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.402 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.402 [2024-10-15 09:22:21.151499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:37.402 BaseBdev1 00:22:37.402 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.402 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:37.402 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:22:37.402 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:37.402 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:37.402 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:37.402 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:37.402 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:22:37.402 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.402 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.402 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.402 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:37.402 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.402 
09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.402 [ 00:22:37.402 { 00:22:37.402 "name": "BaseBdev1", 00:22:37.402 "aliases": [ 00:22:37.402 "da25763d-0701-418f-994f-bd54a3cb043a" 00:22:37.402 ], 00:22:37.402 "product_name": "Malloc disk", 00:22:37.402 "block_size": 512, 00:22:37.402 "num_blocks": 65536, 00:22:37.402 "uuid": "da25763d-0701-418f-994f-bd54a3cb043a", 00:22:37.402 "assigned_rate_limits": { 00:22:37.402 "rw_ios_per_sec": 0, 00:22:37.402 "rw_mbytes_per_sec": 0, 00:22:37.402 "r_mbytes_per_sec": 0, 00:22:37.402 "w_mbytes_per_sec": 0 00:22:37.402 }, 00:22:37.402 "claimed": true, 00:22:37.402 "claim_type": "exclusive_write", 00:22:37.402 "zoned": false, 00:22:37.402 "supported_io_types": { 00:22:37.402 "read": true, 00:22:37.402 "write": true, 00:22:37.402 "unmap": true, 00:22:37.402 "flush": true, 00:22:37.402 "reset": true, 00:22:37.402 "nvme_admin": false, 00:22:37.402 "nvme_io": false, 00:22:37.402 "nvme_io_md": false, 00:22:37.402 "write_zeroes": true, 00:22:37.402 "zcopy": true, 00:22:37.402 "get_zone_info": false, 00:22:37.402 "zone_management": false, 00:22:37.402 "zone_append": false, 00:22:37.402 "compare": false, 00:22:37.402 "compare_and_write": false, 00:22:37.402 "abort": true, 00:22:37.402 "seek_hole": false, 00:22:37.402 "seek_data": false, 00:22:37.402 "copy": true, 00:22:37.402 "nvme_iov_md": false 00:22:37.402 }, 00:22:37.402 "memory_domains": [ 00:22:37.402 { 00:22:37.402 "dma_device_id": "system", 00:22:37.402 "dma_device_type": 1 00:22:37.402 }, 00:22:37.402 { 00:22:37.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.402 "dma_device_type": 2 00:22:37.403 } 00:22:37.403 ], 00:22:37.403 "driver_specific": {} 00:22:37.403 } 00:22:37.403 ] 00:22:37.403 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.403 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:37.403 09:22:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:37.403 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:37.403 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:37.403 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:37.403 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:37.403 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:37.403 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:37.403 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:37.403 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:37.403 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:37.403 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.403 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.403 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:37.403 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.403 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.403 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:37.403 "name": "Existed_Raid", 00:22:37.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.403 "strip_size_kb": 64, 00:22:37.403 "state": 
"configuring", 00:22:37.403 "raid_level": "raid5f", 00:22:37.403 "superblock": false, 00:22:37.403 "num_base_bdevs": 4, 00:22:37.403 "num_base_bdevs_discovered": 3, 00:22:37.403 "num_base_bdevs_operational": 4, 00:22:37.403 "base_bdevs_list": [ 00:22:37.403 { 00:22:37.403 "name": "BaseBdev1", 00:22:37.403 "uuid": "da25763d-0701-418f-994f-bd54a3cb043a", 00:22:37.403 "is_configured": true, 00:22:37.403 "data_offset": 0, 00:22:37.403 "data_size": 65536 00:22:37.403 }, 00:22:37.403 { 00:22:37.403 "name": null, 00:22:37.403 "uuid": "12e91b76-3034-40f6-ac85-562e4ffb7916", 00:22:37.403 "is_configured": false, 00:22:37.403 "data_offset": 0, 00:22:37.403 "data_size": 65536 00:22:37.403 }, 00:22:37.403 { 00:22:37.403 "name": "BaseBdev3", 00:22:37.403 "uuid": "a30a6bff-64dc-4af8-955f-d35076914de9", 00:22:37.403 "is_configured": true, 00:22:37.403 "data_offset": 0, 00:22:37.403 "data_size": 65536 00:22:37.403 }, 00:22:37.403 { 00:22:37.403 "name": "BaseBdev4", 00:22:37.403 "uuid": "af9ab167-4fce-4526-9a17-bfd443c3716c", 00:22:37.403 "is_configured": true, 00:22:37.403 "data_offset": 0, 00:22:37.403 "data_size": 65536 00:22:37.403 } 00:22:37.403 ] 00:22:37.403 }' 00:22:37.403 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:37.403 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.972 09:22:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.972 [2024-10-15 09:22:21.783839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.972 09:22:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:37.972 "name": "Existed_Raid", 00:22:37.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.972 "strip_size_kb": 64, 00:22:37.972 "state": "configuring", 00:22:37.972 "raid_level": "raid5f", 00:22:37.972 "superblock": false, 00:22:37.972 "num_base_bdevs": 4, 00:22:37.972 "num_base_bdevs_discovered": 2, 00:22:37.972 "num_base_bdevs_operational": 4, 00:22:37.972 "base_bdevs_list": [ 00:22:37.972 { 00:22:37.972 "name": "BaseBdev1", 00:22:37.972 "uuid": "da25763d-0701-418f-994f-bd54a3cb043a", 00:22:37.972 "is_configured": true, 00:22:37.972 "data_offset": 0, 00:22:37.972 "data_size": 65536 00:22:37.972 }, 00:22:37.972 { 00:22:37.972 "name": null, 00:22:37.972 "uuid": "12e91b76-3034-40f6-ac85-562e4ffb7916", 00:22:37.972 "is_configured": false, 00:22:37.972 "data_offset": 0, 00:22:37.972 "data_size": 65536 00:22:37.972 }, 00:22:37.972 { 00:22:37.972 "name": null, 00:22:37.972 "uuid": "a30a6bff-64dc-4af8-955f-d35076914de9", 00:22:37.972 "is_configured": false, 00:22:37.972 "data_offset": 0, 00:22:37.972 "data_size": 65536 00:22:37.972 }, 00:22:37.972 { 00:22:37.972 "name": "BaseBdev4", 00:22:37.972 "uuid": "af9ab167-4fce-4526-9a17-bfd443c3716c", 00:22:37.972 "is_configured": true, 00:22:37.972 "data_offset": 0, 00:22:37.972 "data_size": 65536 00:22:37.972 } 00:22:37.972 ] 00:22:37.972 }' 00:22:37.972 09:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:37.972 09:22:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.540 [2024-10-15 09:22:22.380139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:38.540 
09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:38.540 "name": "Existed_Raid", 00:22:38.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.540 "strip_size_kb": 64, 00:22:38.540 "state": "configuring", 00:22:38.540 "raid_level": "raid5f", 00:22:38.540 "superblock": false, 00:22:38.540 "num_base_bdevs": 4, 00:22:38.540 "num_base_bdevs_discovered": 3, 00:22:38.540 "num_base_bdevs_operational": 4, 00:22:38.540 "base_bdevs_list": [ 00:22:38.540 { 00:22:38.540 "name": "BaseBdev1", 00:22:38.540 "uuid": "da25763d-0701-418f-994f-bd54a3cb043a", 00:22:38.540 "is_configured": true, 00:22:38.540 "data_offset": 0, 00:22:38.540 "data_size": 65536 00:22:38.540 }, 00:22:38.540 { 00:22:38.540 "name": null, 00:22:38.540 "uuid": "12e91b76-3034-40f6-ac85-562e4ffb7916", 00:22:38.540 "is_configured": 
false, 00:22:38.540 "data_offset": 0, 00:22:38.540 "data_size": 65536 00:22:38.540 }, 00:22:38.540 { 00:22:38.540 "name": "BaseBdev3", 00:22:38.540 "uuid": "a30a6bff-64dc-4af8-955f-d35076914de9", 00:22:38.540 "is_configured": true, 00:22:38.540 "data_offset": 0, 00:22:38.540 "data_size": 65536 00:22:38.540 }, 00:22:38.540 { 00:22:38.540 "name": "BaseBdev4", 00:22:38.540 "uuid": "af9ab167-4fce-4526-9a17-bfd443c3716c", 00:22:38.540 "is_configured": true, 00:22:38.540 "data_offset": 0, 00:22:38.540 "data_size": 65536 00:22:38.540 } 00:22:38.540 ] 00:22:38.540 }' 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:38.540 09:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.109 09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.109 09:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.109 09:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.109 09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:39.109 09:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.109 09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:39.109 09:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:39.109 09:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.109 09:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.109 [2024-10-15 09:22:22.968360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:39.367 09:22:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.368 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:39.368 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:39.368 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:39.368 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:39.368 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:39.368 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:39.368 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:39.368 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:39.368 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:39.368 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:39.368 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.368 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:39.368 09:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.368 09:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.368 09:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.368 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:39.368 "name": "Existed_Raid", 00:22:39.368 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:39.368 "strip_size_kb": 64, 00:22:39.368 "state": "configuring", 00:22:39.368 "raid_level": "raid5f", 00:22:39.368 "superblock": false, 00:22:39.368 "num_base_bdevs": 4, 00:22:39.368 "num_base_bdevs_discovered": 2, 00:22:39.368 "num_base_bdevs_operational": 4, 00:22:39.368 "base_bdevs_list": [ 00:22:39.368 { 00:22:39.368 "name": null, 00:22:39.368 "uuid": "da25763d-0701-418f-994f-bd54a3cb043a", 00:22:39.368 "is_configured": false, 00:22:39.368 "data_offset": 0, 00:22:39.368 "data_size": 65536 00:22:39.368 }, 00:22:39.368 { 00:22:39.368 "name": null, 00:22:39.368 "uuid": "12e91b76-3034-40f6-ac85-562e4ffb7916", 00:22:39.368 "is_configured": false, 00:22:39.368 "data_offset": 0, 00:22:39.368 "data_size": 65536 00:22:39.368 }, 00:22:39.368 { 00:22:39.368 "name": "BaseBdev3", 00:22:39.368 "uuid": "a30a6bff-64dc-4af8-955f-d35076914de9", 00:22:39.368 "is_configured": true, 00:22:39.368 "data_offset": 0, 00:22:39.368 "data_size": 65536 00:22:39.368 }, 00:22:39.368 { 00:22:39.368 "name": "BaseBdev4", 00:22:39.368 "uuid": "af9ab167-4fce-4526-9a17-bfd443c3716c", 00:22:39.368 "is_configured": true, 00:22:39.368 "data_offset": 0, 00:22:39.368 "data_size": 65536 00:22:39.368 } 00:22:39.368 ] 00:22:39.368 }' 00:22:39.368 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:39.368 09:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.936 [2024-10-15 09:22:23.667937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:39.936 "name": "Existed_Raid", 00:22:39.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.936 "strip_size_kb": 64, 00:22:39.936 "state": "configuring", 00:22:39.936 "raid_level": "raid5f", 00:22:39.936 "superblock": false, 00:22:39.936 "num_base_bdevs": 4, 00:22:39.936 "num_base_bdevs_discovered": 3, 00:22:39.936 "num_base_bdevs_operational": 4, 00:22:39.936 "base_bdevs_list": [ 00:22:39.936 { 00:22:39.936 "name": null, 00:22:39.936 "uuid": "da25763d-0701-418f-994f-bd54a3cb043a", 00:22:39.936 "is_configured": false, 00:22:39.936 "data_offset": 0, 00:22:39.936 "data_size": 65536 00:22:39.936 }, 00:22:39.936 { 00:22:39.936 "name": "BaseBdev2", 00:22:39.936 "uuid": "12e91b76-3034-40f6-ac85-562e4ffb7916", 00:22:39.936 "is_configured": true, 00:22:39.936 "data_offset": 0, 00:22:39.936 "data_size": 65536 00:22:39.936 }, 00:22:39.936 { 00:22:39.936 "name": "BaseBdev3", 00:22:39.936 "uuid": "a30a6bff-64dc-4af8-955f-d35076914de9", 00:22:39.936 "is_configured": true, 00:22:39.936 "data_offset": 0, 00:22:39.936 "data_size": 65536 00:22:39.936 }, 00:22:39.936 { 00:22:39.936 "name": "BaseBdev4", 00:22:39.936 "uuid": "af9ab167-4fce-4526-9a17-bfd443c3716c", 00:22:39.936 "is_configured": true, 00:22:39.936 "data_offset": 0, 00:22:39.936 "data_size": 65536 00:22:39.936 } 00:22:39.936 ] 00:22:39.936 }' 00:22:39.936 09:22:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:39.936 09:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u da25763d-0701-418f-994f-bd54a3cb043a 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.542 [2024-10-15 09:22:24.336673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:40.542 [2024-10-15 
09:22:24.336969] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:40.542 [2024-10-15 09:22:24.336993] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:22:40.542 [2024-10-15 09:22:24.337362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:22:40.542 [2024-10-15 09:22:24.344257] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:40.542 [2024-10-15 09:22:24.344403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:40.542 [2024-10-15 09:22:24.344878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:40.542 NewBaseBdev 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.542 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.542 [ 00:22:40.542 { 00:22:40.542 "name": "NewBaseBdev", 00:22:40.542 "aliases": [ 00:22:40.542 "da25763d-0701-418f-994f-bd54a3cb043a" 00:22:40.542 ], 00:22:40.542 "product_name": "Malloc disk", 00:22:40.542 "block_size": 512, 00:22:40.542 "num_blocks": 65536, 00:22:40.542 "uuid": "da25763d-0701-418f-994f-bd54a3cb043a", 00:22:40.542 "assigned_rate_limits": { 00:22:40.542 "rw_ios_per_sec": 0, 00:22:40.542 "rw_mbytes_per_sec": 0, 00:22:40.542 "r_mbytes_per_sec": 0, 00:22:40.542 "w_mbytes_per_sec": 0 00:22:40.542 }, 00:22:40.542 "claimed": true, 00:22:40.542 "claim_type": "exclusive_write", 00:22:40.542 "zoned": false, 00:22:40.542 "supported_io_types": { 00:22:40.542 "read": true, 00:22:40.542 "write": true, 00:22:40.542 "unmap": true, 00:22:40.542 "flush": true, 00:22:40.542 "reset": true, 00:22:40.542 "nvme_admin": false, 00:22:40.542 "nvme_io": false, 00:22:40.542 "nvme_io_md": false, 00:22:40.542 "write_zeroes": true, 00:22:40.542 "zcopy": true, 00:22:40.542 "get_zone_info": false, 00:22:40.542 "zone_management": false, 00:22:40.542 "zone_append": false, 00:22:40.542 "compare": false, 00:22:40.542 "compare_and_write": false, 00:22:40.542 "abort": true, 00:22:40.542 "seek_hole": false, 00:22:40.542 "seek_data": false, 00:22:40.542 "copy": true, 00:22:40.542 "nvme_iov_md": false 00:22:40.542 }, 00:22:40.542 "memory_domains": [ 00:22:40.542 { 00:22:40.542 "dma_device_id": "system", 00:22:40.542 "dma_device_type": 1 00:22:40.542 }, 00:22:40.542 { 00:22:40.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:40.542 "dma_device_type": 2 00:22:40.542 } 
00:22:40.542 ], 00:22:40.542 "driver_specific": {} 00:22:40.542 } 00:22:40.542 ] 00:22:40.543 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.543 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:40.543 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:22:40.543 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:40.543 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:40.543 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:40.543 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:40.543 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:40.543 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:40.543 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:40.543 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:40.543 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:40.543 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.543 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:40.543 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.543 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.543 09:22:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.543 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:40.543 "name": "Existed_Raid", 00:22:40.543 "uuid": "99b46949-647b-4b52-bdb0-b0802132e597", 00:22:40.543 "strip_size_kb": 64, 00:22:40.543 "state": "online", 00:22:40.543 "raid_level": "raid5f", 00:22:40.543 "superblock": false, 00:22:40.543 "num_base_bdevs": 4, 00:22:40.543 "num_base_bdevs_discovered": 4, 00:22:40.543 "num_base_bdevs_operational": 4, 00:22:40.543 "base_bdevs_list": [ 00:22:40.543 { 00:22:40.543 "name": "NewBaseBdev", 00:22:40.543 "uuid": "da25763d-0701-418f-994f-bd54a3cb043a", 00:22:40.543 "is_configured": true, 00:22:40.543 "data_offset": 0, 00:22:40.543 "data_size": 65536 00:22:40.543 }, 00:22:40.543 { 00:22:40.543 "name": "BaseBdev2", 00:22:40.543 "uuid": "12e91b76-3034-40f6-ac85-562e4ffb7916", 00:22:40.543 "is_configured": true, 00:22:40.543 "data_offset": 0, 00:22:40.543 "data_size": 65536 00:22:40.543 }, 00:22:40.543 { 00:22:40.543 "name": "BaseBdev3", 00:22:40.543 "uuid": "a30a6bff-64dc-4af8-955f-d35076914de9", 00:22:40.543 "is_configured": true, 00:22:40.543 "data_offset": 0, 00:22:40.543 "data_size": 65536 00:22:40.543 }, 00:22:40.543 { 00:22:40.543 "name": "BaseBdev4", 00:22:40.543 "uuid": "af9ab167-4fce-4526-9a17-bfd443c3716c", 00:22:40.543 "is_configured": true, 00:22:40.543 "data_offset": 0, 00:22:40.543 "data_size": 65536 00:22:40.543 } 00:22:40.543 ] 00:22:40.543 }' 00:22:40.543 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:40.543 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.112 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:41.112 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:41.112 09:22:24 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:41.112 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:41.112 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:41.112 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:41.112 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:41.112 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:41.112 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.112 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.112 [2024-10-15 09:22:24.925504] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:41.112 09:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.112 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:41.112 "name": "Existed_Raid", 00:22:41.112 "aliases": [ 00:22:41.112 "99b46949-647b-4b52-bdb0-b0802132e597" 00:22:41.112 ], 00:22:41.112 "product_name": "Raid Volume", 00:22:41.112 "block_size": 512, 00:22:41.112 "num_blocks": 196608, 00:22:41.112 "uuid": "99b46949-647b-4b52-bdb0-b0802132e597", 00:22:41.112 "assigned_rate_limits": { 00:22:41.112 "rw_ios_per_sec": 0, 00:22:41.112 "rw_mbytes_per_sec": 0, 00:22:41.112 "r_mbytes_per_sec": 0, 00:22:41.112 "w_mbytes_per_sec": 0 00:22:41.112 }, 00:22:41.112 "claimed": false, 00:22:41.112 "zoned": false, 00:22:41.112 "supported_io_types": { 00:22:41.112 "read": true, 00:22:41.112 "write": true, 00:22:41.112 "unmap": false, 00:22:41.112 "flush": false, 00:22:41.112 "reset": true, 00:22:41.112 "nvme_admin": false, 00:22:41.112 "nvme_io": false, 00:22:41.112 "nvme_io_md": 
false, 00:22:41.112 "write_zeroes": true, 00:22:41.112 "zcopy": false, 00:22:41.112 "get_zone_info": false, 00:22:41.112 "zone_management": false, 00:22:41.112 "zone_append": false, 00:22:41.112 "compare": false, 00:22:41.112 "compare_and_write": false, 00:22:41.112 "abort": false, 00:22:41.112 "seek_hole": false, 00:22:41.112 "seek_data": false, 00:22:41.112 "copy": false, 00:22:41.112 "nvme_iov_md": false 00:22:41.112 }, 00:22:41.112 "driver_specific": { 00:22:41.112 "raid": { 00:22:41.112 "uuid": "99b46949-647b-4b52-bdb0-b0802132e597", 00:22:41.112 "strip_size_kb": 64, 00:22:41.112 "state": "online", 00:22:41.112 "raid_level": "raid5f", 00:22:41.112 "superblock": false, 00:22:41.112 "num_base_bdevs": 4, 00:22:41.112 "num_base_bdevs_discovered": 4, 00:22:41.112 "num_base_bdevs_operational": 4, 00:22:41.112 "base_bdevs_list": [ 00:22:41.112 { 00:22:41.112 "name": "NewBaseBdev", 00:22:41.112 "uuid": "da25763d-0701-418f-994f-bd54a3cb043a", 00:22:41.112 "is_configured": true, 00:22:41.112 "data_offset": 0, 00:22:41.112 "data_size": 65536 00:22:41.112 }, 00:22:41.112 { 00:22:41.112 "name": "BaseBdev2", 00:22:41.112 "uuid": "12e91b76-3034-40f6-ac85-562e4ffb7916", 00:22:41.112 "is_configured": true, 00:22:41.112 "data_offset": 0, 00:22:41.112 "data_size": 65536 00:22:41.112 }, 00:22:41.112 { 00:22:41.112 "name": "BaseBdev3", 00:22:41.112 "uuid": "a30a6bff-64dc-4af8-955f-d35076914de9", 00:22:41.112 "is_configured": true, 00:22:41.112 "data_offset": 0, 00:22:41.112 "data_size": 65536 00:22:41.112 }, 00:22:41.112 { 00:22:41.112 "name": "BaseBdev4", 00:22:41.112 "uuid": "af9ab167-4fce-4526-9a17-bfd443c3716c", 00:22:41.112 "is_configured": true, 00:22:41.112 "data_offset": 0, 00:22:41.112 "data_size": 65536 00:22:41.112 } 00:22:41.112 ] 00:22:41.112 } 00:22:41.112 } 00:22:41.112 }' 00:22:41.112 09:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:41.112 09:22:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:41.112 BaseBdev2 00:22:41.112 BaseBdev3 00:22:41.112 BaseBdev4' 00:22:41.112 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.372 09:22:25 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.631 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:41.631 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:41.631 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:41.631 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.631 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.631 [2024-10-15 09:22:25.305282] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:41.631 [2024-10-15 09:22:25.305441] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:41.631 [2024-10-15 09:22:25.305662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:41.631 [2024-10-15 09:22:25.306222] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:41.631 [2024-10-15 09:22:25.306386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:41.631 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.631 09:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83465 00:22:41.631 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 83465 ']' 00:22:41.631 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 83465 00:22:41.631 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:22:41.631 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:41.631 09:22:25 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83465 00:22:41.631 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:41.631 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:41.631 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83465' 00:22:41.631 killing process with pid 83465 00:22:41.631 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 83465 00:22:41.631 [2024-10-15 09:22:25.345570] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:41.631 09:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 83465 00:22:41.890 [2024-10-15 09:22:25.756556] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:43.269 09:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:22:43.269 00:22:43.269 real 0m13.413s 00:22:43.269 user 0m21.952s 00:22:43.269 sys 0m1.988s 00:22:43.269 09:22:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:43.269 ************************************ 00:22:43.269 END TEST raid5f_state_function_test 00:22:43.269 09:22:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.269 ************************************ 00:22:43.269 09:22:27 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:22:43.269 09:22:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:43.269 09:22:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:43.269 09:22:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:43.269 ************************************ 00:22:43.269 START TEST 
raid5f_state_function_test_sb 00:22:43.269 ************************************ 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:22:43.269 
09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:43.269 Process raid pid: 84148 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84148 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84148' 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:43.269 09:22:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84148 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84148 ']' 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:43.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:43.269 09:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:43.269 [2024-10-15 09:22:27.147732] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:22:43.269 [2024-10-15 09:22:27.147932] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.528 [2024-10-15 09:22:27.329796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.787 [2024-10-15 09:22:27.510865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.045 [2024-10-15 09:22:27.745479] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:44.045 [2024-10-15 09:22:27.745541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.305 [2024-10-15 09:22:28.122339] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:44.305 [2024-10-15 09:22:28.122546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:44.305 [2024-10-15 09:22:28.122667] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:44.305 [2024-10-15 09:22:28.122727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:44.305 [2024-10-15 09:22:28.122863] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:22:44.305 [2024-10-15 09:22:28.122931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:44.305 [2024-10-15 09:22:28.123038] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:44.305 [2024-10-15 09:22:28.123108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:44.305 "name": "Existed_Raid", 00:22:44.305 "uuid": "feae0e2f-4dad-4368-9f70-68aaa784e103", 00:22:44.305 "strip_size_kb": 64, 00:22:44.305 "state": "configuring", 00:22:44.305 "raid_level": "raid5f", 00:22:44.305 "superblock": true, 00:22:44.305 "num_base_bdevs": 4, 00:22:44.305 "num_base_bdevs_discovered": 0, 00:22:44.305 "num_base_bdevs_operational": 4, 00:22:44.305 "base_bdevs_list": [ 00:22:44.305 { 00:22:44.305 "name": "BaseBdev1", 00:22:44.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.305 "is_configured": false, 00:22:44.305 "data_offset": 0, 00:22:44.305 "data_size": 0 00:22:44.305 }, 00:22:44.305 { 00:22:44.305 "name": "BaseBdev2", 00:22:44.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.305 "is_configured": false, 00:22:44.305 "data_offset": 0, 00:22:44.305 "data_size": 0 00:22:44.305 }, 00:22:44.305 { 00:22:44.305 "name": "BaseBdev3", 00:22:44.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.305 "is_configured": false, 00:22:44.305 "data_offset": 0, 00:22:44.305 "data_size": 0 00:22:44.305 }, 00:22:44.305 { 00:22:44.305 "name": "BaseBdev4", 00:22:44.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.305 "is_configured": false, 00:22:44.305 "data_offset": 0, 00:22:44.305 "data_size": 0 00:22:44.305 } 00:22:44.305 ] 00:22:44.305 }' 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:44.305 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.872 [2024-10-15 09:22:28.642449] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:44.872 [2024-10-15 09:22:28.642504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.872 [2024-10-15 09:22:28.650465] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:44.872 [2024-10-15 09:22:28.650652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:44.872 [2024-10-15 09:22:28.650680] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:44.872 [2024-10-15 09:22:28.650699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:44.872 [2024-10-15 09:22:28.650710] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:44.872 [2024-10-15 09:22:28.650725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:44.872 [2024-10-15 09:22:28.650735] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:44.872 [2024-10-15 09:22:28.650749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.872 [2024-10-15 09:22:28.704688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:44.872 BaseBdev1 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.872 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.872 [ 00:22:44.872 { 00:22:44.872 "name": "BaseBdev1", 00:22:44.872 "aliases": [ 00:22:44.872 "badd9dc6-3e55-400d-a98a-1e5689324421" 00:22:44.872 ], 00:22:44.872 "product_name": "Malloc disk", 00:22:44.872 "block_size": 512, 00:22:44.872 "num_blocks": 65536, 00:22:44.872 "uuid": "badd9dc6-3e55-400d-a98a-1e5689324421", 00:22:44.872 "assigned_rate_limits": { 00:22:44.872 "rw_ios_per_sec": 0, 00:22:44.872 "rw_mbytes_per_sec": 0, 00:22:44.872 "r_mbytes_per_sec": 0, 00:22:44.872 "w_mbytes_per_sec": 0 00:22:44.872 }, 00:22:44.872 "claimed": true, 00:22:44.872 "claim_type": "exclusive_write", 00:22:44.872 "zoned": false, 00:22:44.872 "supported_io_types": { 00:22:44.872 "read": true, 00:22:44.872 "write": true, 00:22:44.872 "unmap": true, 00:22:44.872 "flush": true, 00:22:44.872 "reset": true, 00:22:44.872 "nvme_admin": false, 00:22:44.872 "nvme_io": false, 00:22:44.872 "nvme_io_md": false, 00:22:44.872 "write_zeroes": true, 00:22:44.873 "zcopy": true, 00:22:44.873 "get_zone_info": false, 00:22:44.873 "zone_management": false, 00:22:44.873 "zone_append": false, 00:22:44.873 "compare": false, 00:22:44.873 "compare_and_write": false, 00:22:44.873 "abort": true, 00:22:44.873 "seek_hole": false, 00:22:44.873 "seek_data": false, 00:22:44.873 "copy": true, 00:22:44.873 "nvme_iov_md": false 00:22:44.873 }, 00:22:44.873 "memory_domains": [ 00:22:44.873 { 00:22:44.873 "dma_device_id": "system", 00:22:44.873 "dma_device_type": 1 00:22:44.873 }, 00:22:44.873 { 00:22:44.873 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:44.873 "dma_device_type": 2 00:22:44.873 } 00:22:44.873 ], 00:22:44.873 "driver_specific": {} 00:22:44.873 } 00:22:44.873 ] 00:22:44.873 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.873 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:44.873 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:44.873 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:44.873 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:44.873 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:44.873 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:44.873 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:44.873 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:44.873 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:44.873 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:44.873 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:44.873 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.873 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.873 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.873 09:22:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:44.873 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.873 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:44.873 "name": "Existed_Raid", 00:22:44.873 "uuid": "77e2bda8-335c-4e2a-8a3c-e448ec0a5b29", 00:22:44.873 "strip_size_kb": 64, 00:22:44.873 "state": "configuring", 00:22:44.873 "raid_level": "raid5f", 00:22:44.873 "superblock": true, 00:22:44.873 "num_base_bdevs": 4, 00:22:44.873 "num_base_bdevs_discovered": 1, 00:22:44.873 "num_base_bdevs_operational": 4, 00:22:44.873 "base_bdevs_list": [ 00:22:44.873 { 00:22:44.873 "name": "BaseBdev1", 00:22:44.873 "uuid": "badd9dc6-3e55-400d-a98a-1e5689324421", 00:22:44.873 "is_configured": true, 00:22:44.873 "data_offset": 2048, 00:22:44.873 "data_size": 63488 00:22:44.873 }, 00:22:44.873 { 00:22:44.873 "name": "BaseBdev2", 00:22:44.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.873 "is_configured": false, 00:22:44.873 "data_offset": 0, 00:22:44.873 "data_size": 0 00:22:44.873 }, 00:22:44.873 { 00:22:44.873 "name": "BaseBdev3", 00:22:44.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.873 "is_configured": false, 00:22:44.873 "data_offset": 0, 00:22:44.873 "data_size": 0 00:22:44.873 }, 00:22:44.873 { 00:22:44.873 "name": "BaseBdev4", 00:22:44.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.873 "is_configured": false, 00:22:44.873 "data_offset": 0, 00:22:44.873 "data_size": 0 00:22:44.873 } 00:22:44.873 ] 00:22:44.873 }' 00:22:44.873 09:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:44.873 09:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.438 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:45.438 09:22:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.438 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.438 [2024-10-15 09:22:29.245023] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:45.438 [2024-10-15 09:22:29.245153] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:45.438 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.439 [2024-10-15 09:22:29.252998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:45.439 [2024-10-15 09:22:29.255721] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:45.439 [2024-10-15 09:22:29.255893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:45.439 [2024-10-15 09:22:29.256013] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:45.439 [2024-10-15 09:22:29.256074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:45.439 [2024-10-15 09:22:29.256224] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:45.439 [2024-10-15 09:22:29.256287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:45.439 09:22:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:45.439 "name": "Existed_Raid", 00:22:45.439 "uuid": "b99e1246-f39f-44c9-83c7-9ffb10003987", 00:22:45.439 "strip_size_kb": 64, 00:22:45.439 "state": "configuring", 00:22:45.439 "raid_level": "raid5f", 00:22:45.439 "superblock": true, 00:22:45.439 "num_base_bdevs": 4, 00:22:45.439 "num_base_bdevs_discovered": 1, 00:22:45.439 "num_base_bdevs_operational": 4, 00:22:45.439 "base_bdevs_list": [ 00:22:45.439 { 00:22:45.439 "name": "BaseBdev1", 00:22:45.439 "uuid": "badd9dc6-3e55-400d-a98a-1e5689324421", 00:22:45.439 "is_configured": true, 00:22:45.439 "data_offset": 2048, 00:22:45.439 "data_size": 63488 00:22:45.439 }, 00:22:45.439 { 00:22:45.439 "name": "BaseBdev2", 00:22:45.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.439 "is_configured": false, 00:22:45.439 "data_offset": 0, 00:22:45.439 "data_size": 0 00:22:45.439 }, 00:22:45.439 { 00:22:45.439 "name": "BaseBdev3", 00:22:45.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.439 "is_configured": false, 00:22:45.439 "data_offset": 0, 00:22:45.439 "data_size": 0 00:22:45.439 }, 00:22:45.439 { 00:22:45.439 "name": "BaseBdev4", 00:22:45.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.439 "is_configured": false, 00:22:45.439 "data_offset": 0, 00:22:45.439 "data_size": 0 00:22:45.439 } 00:22:45.439 ] 00:22:45.439 }' 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:45.439 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.006 [2024-10-15 09:22:29.829774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:46.006 BaseBdev2 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.006 [ 00:22:46.006 { 00:22:46.006 "name": "BaseBdev2", 00:22:46.006 "aliases": [ 00:22:46.006 
"a91ce5c8-8e5a-4ebe-b711-5ade23634655" 00:22:46.006 ], 00:22:46.006 "product_name": "Malloc disk", 00:22:46.006 "block_size": 512, 00:22:46.006 "num_blocks": 65536, 00:22:46.006 "uuid": "a91ce5c8-8e5a-4ebe-b711-5ade23634655", 00:22:46.006 "assigned_rate_limits": { 00:22:46.006 "rw_ios_per_sec": 0, 00:22:46.006 "rw_mbytes_per_sec": 0, 00:22:46.006 "r_mbytes_per_sec": 0, 00:22:46.006 "w_mbytes_per_sec": 0 00:22:46.006 }, 00:22:46.006 "claimed": true, 00:22:46.006 "claim_type": "exclusive_write", 00:22:46.006 "zoned": false, 00:22:46.006 "supported_io_types": { 00:22:46.006 "read": true, 00:22:46.006 "write": true, 00:22:46.006 "unmap": true, 00:22:46.006 "flush": true, 00:22:46.006 "reset": true, 00:22:46.006 "nvme_admin": false, 00:22:46.006 "nvme_io": false, 00:22:46.006 "nvme_io_md": false, 00:22:46.006 "write_zeroes": true, 00:22:46.006 "zcopy": true, 00:22:46.006 "get_zone_info": false, 00:22:46.006 "zone_management": false, 00:22:46.006 "zone_append": false, 00:22:46.006 "compare": false, 00:22:46.006 "compare_and_write": false, 00:22:46.006 "abort": true, 00:22:46.006 "seek_hole": false, 00:22:46.006 "seek_data": false, 00:22:46.006 "copy": true, 00:22:46.006 "nvme_iov_md": false 00:22:46.006 }, 00:22:46.006 "memory_domains": [ 00:22:46.006 { 00:22:46.006 "dma_device_id": "system", 00:22:46.006 "dma_device_type": 1 00:22:46.006 }, 00:22:46.006 { 00:22:46.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:46.006 "dma_device_type": 2 00:22:46.006 } 00:22:46.006 ], 00:22:46.006 "driver_specific": {} 00:22:46.006 } 00:22:46.006 ] 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.006 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:46.006 "name": "Existed_Raid", 00:22:46.006 "uuid": 
"b99e1246-f39f-44c9-83c7-9ffb10003987", 00:22:46.006 "strip_size_kb": 64, 00:22:46.006 "state": "configuring", 00:22:46.006 "raid_level": "raid5f", 00:22:46.006 "superblock": true, 00:22:46.006 "num_base_bdevs": 4, 00:22:46.006 "num_base_bdevs_discovered": 2, 00:22:46.006 "num_base_bdevs_operational": 4, 00:22:46.006 "base_bdevs_list": [ 00:22:46.006 { 00:22:46.006 "name": "BaseBdev1", 00:22:46.006 "uuid": "badd9dc6-3e55-400d-a98a-1e5689324421", 00:22:46.006 "is_configured": true, 00:22:46.006 "data_offset": 2048, 00:22:46.006 "data_size": 63488 00:22:46.006 }, 00:22:46.006 { 00:22:46.006 "name": "BaseBdev2", 00:22:46.006 "uuid": "a91ce5c8-8e5a-4ebe-b711-5ade23634655", 00:22:46.006 "is_configured": true, 00:22:46.006 "data_offset": 2048, 00:22:46.006 "data_size": 63488 00:22:46.006 }, 00:22:46.006 { 00:22:46.006 "name": "BaseBdev3", 00:22:46.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.006 "is_configured": false, 00:22:46.007 "data_offset": 0, 00:22:46.007 "data_size": 0 00:22:46.007 }, 00:22:46.007 { 00:22:46.007 "name": "BaseBdev4", 00:22:46.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.007 "is_configured": false, 00:22:46.007 "data_offset": 0, 00:22:46.007 "data_size": 0 00:22:46.007 } 00:22:46.007 ] 00:22:46.007 }' 00:22:46.007 09:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:46.007 09:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.574 [2024-10-15 09:22:30.438266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:46.574 BaseBdev3 
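Annotation: each `waitforbdev BaseBdevN` call above sets `bdev_timeout=2000` and polls via `rpc_cmd bdev_get_bdevs -b BaseBdevN -t 2000` until the freshly created malloc bdev is visible. A rough Python sketch of that wait loop (the `get_bdevs` stub and the poll interval are assumptions for illustration, not SPDK behavior):

```python
import time

def waitforbdev(get_bdevs, bdev_name, timeout_ms=2000, poll_interval=0.05):
    """Poll until the named bdev appears in the listing or the timeout
    expires. `get_bdevs` stands in for the `bdev_get_bdevs` RPC."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if any(b["name"] == bdev_name for b in get_bdevs()):
            return True
        time.sleep(poll_interval)
    return False

# Stub listing, shaped like the log's output after
# `bdev_malloc_create 32 512 -b BaseBdev3`.
bdevs = [{"name": "BaseBdev3", "block_size": 512, "num_blocks": 65536}]
print(waitforbdev(lambda: bdevs, "BaseBdev3"))
```

The real helper additionally calls `bdev_wait_for_examine` first, so claimed bdevs are fully examined before the lookup.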
00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.574 [ 00:22:46.574 { 00:22:46.574 "name": "BaseBdev3", 00:22:46.574 "aliases": [ 00:22:46.574 "4d3d889a-c58a-4949-9213-3a1e76ce6182" 00:22:46.574 ], 00:22:46.574 "product_name": "Malloc disk", 00:22:46.574 "block_size": 512, 00:22:46.574 "num_blocks": 65536, 00:22:46.574 "uuid": "4d3d889a-c58a-4949-9213-3a1e76ce6182", 00:22:46.574 
"assigned_rate_limits": { 00:22:46.574 "rw_ios_per_sec": 0, 00:22:46.574 "rw_mbytes_per_sec": 0, 00:22:46.574 "r_mbytes_per_sec": 0, 00:22:46.574 "w_mbytes_per_sec": 0 00:22:46.574 }, 00:22:46.574 "claimed": true, 00:22:46.574 "claim_type": "exclusive_write", 00:22:46.574 "zoned": false, 00:22:46.574 "supported_io_types": { 00:22:46.574 "read": true, 00:22:46.574 "write": true, 00:22:46.574 "unmap": true, 00:22:46.574 "flush": true, 00:22:46.574 "reset": true, 00:22:46.574 "nvme_admin": false, 00:22:46.574 "nvme_io": false, 00:22:46.574 "nvme_io_md": false, 00:22:46.574 "write_zeroes": true, 00:22:46.574 "zcopy": true, 00:22:46.574 "get_zone_info": false, 00:22:46.574 "zone_management": false, 00:22:46.574 "zone_append": false, 00:22:46.574 "compare": false, 00:22:46.574 "compare_and_write": false, 00:22:46.574 "abort": true, 00:22:46.574 "seek_hole": false, 00:22:46.574 "seek_data": false, 00:22:46.574 "copy": true, 00:22:46.574 "nvme_iov_md": false 00:22:46.574 }, 00:22:46.574 "memory_domains": [ 00:22:46.574 { 00:22:46.574 "dma_device_id": "system", 00:22:46.574 "dma_device_type": 1 00:22:46.574 }, 00:22:46.574 { 00:22:46.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:46.574 "dma_device_type": 2 00:22:46.574 } 00:22:46.574 ], 00:22:46.574 "driver_specific": {} 00:22:46.574 } 00:22:46.574 ] 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.574 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.833 09:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:46.833 "name": "Existed_Raid", 00:22:46.833 "uuid": "b99e1246-f39f-44c9-83c7-9ffb10003987", 00:22:46.833 "strip_size_kb": 64, 00:22:46.833 "state": "configuring", 00:22:46.833 "raid_level": "raid5f", 00:22:46.833 "superblock": true, 00:22:46.833 "num_base_bdevs": 4, 00:22:46.833 "num_base_bdevs_discovered": 3, 
00:22:46.833 "num_base_bdevs_operational": 4, 00:22:46.833 "base_bdevs_list": [ 00:22:46.833 { 00:22:46.833 "name": "BaseBdev1", 00:22:46.833 "uuid": "badd9dc6-3e55-400d-a98a-1e5689324421", 00:22:46.833 "is_configured": true, 00:22:46.833 "data_offset": 2048, 00:22:46.833 "data_size": 63488 00:22:46.833 }, 00:22:46.833 { 00:22:46.833 "name": "BaseBdev2", 00:22:46.833 "uuid": "a91ce5c8-8e5a-4ebe-b711-5ade23634655", 00:22:46.833 "is_configured": true, 00:22:46.833 "data_offset": 2048, 00:22:46.833 "data_size": 63488 00:22:46.833 }, 00:22:46.833 { 00:22:46.833 "name": "BaseBdev3", 00:22:46.833 "uuid": "4d3d889a-c58a-4949-9213-3a1e76ce6182", 00:22:46.833 "is_configured": true, 00:22:46.833 "data_offset": 2048, 00:22:46.833 "data_size": 63488 00:22:46.833 }, 00:22:46.833 { 00:22:46.833 "name": "BaseBdev4", 00:22:46.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.833 "is_configured": false, 00:22:46.833 "data_offset": 0, 00:22:46.833 "data_size": 0 00:22:46.833 } 00:22:46.833 ] 00:22:46.833 }' 00:22:46.833 09:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:46.833 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.121 09:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:47.121 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.121 09:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.121 [2024-10-15 09:22:31.021551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:47.121 BaseBdev4 00:22:47.121 [2024-10-15 09:22:31.022288] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:47.121 [2024-10-15 09:22:31.022324] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 
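Annotation: the `blockcnt 190464, blocklen 512` line above is consistent with raid5f capacity math. With superblocks enabled (`-s`), each 65536-block malloc bdev contributes 63488 data blocks (`data_offset` 2048, as shown in the per-bdev info), and one base bdev's worth of capacity across the 4-bdev array goes to parity. A quick check of that arithmetic, using only values from the log (the parity accounting is standard RAID-5 math, not taken from SPDK source):

```python
# Values reported in the log above.
num_base_bdevs = 4
num_blocks_per_bdev = 65536   # bdev_malloc_create 32 512 -> 65536 x 512B blocks
data_offset = 2048            # superblock region at the start of each base bdev
data_size = 63488             # per-bdev data blocks reported in base_bdevs_list
blocklen = 512

# The data region is whatever remains after the superblock offset.
assert num_blocks_per_bdev - data_offset == data_size

# raid5f stores one parity strip per stripe, so the array exposes
# (n - 1) base bdevs' worth of data blocks.
raid_blockcnt = (num_base_bdevs - 1) * data_size
print(raid_blockcnt)  # 190464, matching "blockcnt 190464, blocklen 512"
```

This is why the array only transitions to `online` after the fourth base bdev (`BaseBdev4`) is claimed: with fewer than `num_base_bdevs` members it cannot compute the full stripe geometry.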
00:22:47.121 [2024-10-15 09:22:31.022707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:47.121 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.121 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:22:47.121 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:22:47.121 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:47.121 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:47.121 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:47.121 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:47.121 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:22:47.121 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.121 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.121 [2024-10-15 09:22:31.029978] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:47.121 [2024-10-15 09:22:31.030192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:47.121 [2024-10-15 09:22:31.030674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:47.121 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.121 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:47.121 09:22:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.121 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.386 [ 00:22:47.386 { 00:22:47.386 "name": "BaseBdev4", 00:22:47.386 "aliases": [ 00:22:47.386 "d98d8491-735f-4ddb-bfa5-8f8bb94f68a5" 00:22:47.386 ], 00:22:47.386 "product_name": "Malloc disk", 00:22:47.386 "block_size": 512, 00:22:47.386 "num_blocks": 65536, 00:22:47.386 "uuid": "d98d8491-735f-4ddb-bfa5-8f8bb94f68a5", 00:22:47.386 "assigned_rate_limits": { 00:22:47.386 "rw_ios_per_sec": 0, 00:22:47.386 "rw_mbytes_per_sec": 0, 00:22:47.386 "r_mbytes_per_sec": 0, 00:22:47.386 "w_mbytes_per_sec": 0 00:22:47.386 }, 00:22:47.386 "claimed": true, 00:22:47.386 "claim_type": "exclusive_write", 00:22:47.386 "zoned": false, 00:22:47.386 "supported_io_types": { 00:22:47.386 "read": true, 00:22:47.386 "write": true, 00:22:47.386 "unmap": true, 00:22:47.386 "flush": true, 00:22:47.386 "reset": true, 00:22:47.386 "nvme_admin": false, 00:22:47.386 "nvme_io": false, 00:22:47.386 "nvme_io_md": false, 00:22:47.386 "write_zeroes": true, 00:22:47.386 "zcopy": true, 00:22:47.386 "get_zone_info": false, 00:22:47.386 "zone_management": false, 00:22:47.386 "zone_append": false, 00:22:47.386 "compare": false, 00:22:47.386 "compare_and_write": false, 00:22:47.386 "abort": true, 00:22:47.386 "seek_hole": false, 00:22:47.386 "seek_data": false, 00:22:47.386 "copy": true, 00:22:47.386 "nvme_iov_md": false 00:22:47.386 }, 00:22:47.386 "memory_domains": [ 00:22:47.386 { 00:22:47.386 "dma_device_id": "system", 00:22:47.386 "dma_device_type": 1 00:22:47.386 }, 00:22:47.386 { 00:22:47.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.386 "dma_device_type": 2 00:22:47.386 } 00:22:47.386 ], 00:22:47.386 "driver_specific": {} 00:22:47.386 } 00:22:47.386 ] 00:22:47.386 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.386 09:22:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:47.386 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:47.386 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:47.386 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:22:47.386 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:47.386 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:47.386 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:47.386 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:47.386 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:47.386 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:47.386 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:47.386 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:47.386 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:47.386 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.386 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.386 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.386 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:22:47.386 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.386 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:47.386 "name": "Existed_Raid", 00:22:47.386 "uuid": "b99e1246-f39f-44c9-83c7-9ffb10003987", 00:22:47.386 "strip_size_kb": 64, 00:22:47.386 "state": "online", 00:22:47.386 "raid_level": "raid5f", 00:22:47.386 "superblock": true, 00:22:47.386 "num_base_bdevs": 4, 00:22:47.387 "num_base_bdevs_discovered": 4, 00:22:47.387 "num_base_bdevs_operational": 4, 00:22:47.387 "base_bdevs_list": [ 00:22:47.387 { 00:22:47.387 "name": "BaseBdev1", 00:22:47.387 "uuid": "badd9dc6-3e55-400d-a98a-1e5689324421", 00:22:47.387 "is_configured": true, 00:22:47.387 "data_offset": 2048, 00:22:47.387 "data_size": 63488 00:22:47.387 }, 00:22:47.387 { 00:22:47.387 "name": "BaseBdev2", 00:22:47.387 "uuid": "a91ce5c8-8e5a-4ebe-b711-5ade23634655", 00:22:47.387 "is_configured": true, 00:22:47.387 "data_offset": 2048, 00:22:47.387 "data_size": 63488 00:22:47.387 }, 00:22:47.387 { 00:22:47.387 "name": "BaseBdev3", 00:22:47.387 "uuid": "4d3d889a-c58a-4949-9213-3a1e76ce6182", 00:22:47.387 "is_configured": true, 00:22:47.387 "data_offset": 2048, 00:22:47.387 "data_size": 63488 00:22:47.387 }, 00:22:47.387 { 00:22:47.387 "name": "BaseBdev4", 00:22:47.387 "uuid": "d98d8491-735f-4ddb-bfa5-8f8bb94f68a5", 00:22:47.387 "is_configured": true, 00:22:47.387 "data_offset": 2048, 00:22:47.387 "data_size": 63488 00:22:47.387 } 00:22:47.387 ] 00:22:47.387 }' 00:22:47.387 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:47.387 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.953 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:47.953 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:22:47.953 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:47.953 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:47.953 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:47.953 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:47.953 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:47.953 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:47.953 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.953 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.953 [2024-10-15 09:22:31.587310] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:47.953 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.953 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:47.953 "name": "Existed_Raid", 00:22:47.953 "aliases": [ 00:22:47.953 "b99e1246-f39f-44c9-83c7-9ffb10003987" 00:22:47.953 ], 00:22:47.953 "product_name": "Raid Volume", 00:22:47.953 "block_size": 512, 00:22:47.953 "num_blocks": 190464, 00:22:47.953 "uuid": "b99e1246-f39f-44c9-83c7-9ffb10003987", 00:22:47.953 "assigned_rate_limits": { 00:22:47.953 "rw_ios_per_sec": 0, 00:22:47.953 "rw_mbytes_per_sec": 0, 00:22:47.953 "r_mbytes_per_sec": 0, 00:22:47.953 "w_mbytes_per_sec": 0 00:22:47.953 }, 00:22:47.953 "claimed": false, 00:22:47.953 "zoned": false, 00:22:47.953 "supported_io_types": { 00:22:47.953 "read": true, 00:22:47.953 "write": true, 00:22:47.953 "unmap": false, 00:22:47.953 "flush": false, 
00:22:47.953 "reset": true, 00:22:47.953 "nvme_admin": false, 00:22:47.953 "nvme_io": false, 00:22:47.953 "nvme_io_md": false, 00:22:47.953 "write_zeroes": true, 00:22:47.953 "zcopy": false, 00:22:47.953 "get_zone_info": false, 00:22:47.953 "zone_management": false, 00:22:47.953 "zone_append": false, 00:22:47.953 "compare": false, 00:22:47.953 "compare_and_write": false, 00:22:47.953 "abort": false, 00:22:47.953 "seek_hole": false, 00:22:47.953 "seek_data": false, 00:22:47.953 "copy": false, 00:22:47.953 "nvme_iov_md": false 00:22:47.953 }, 00:22:47.953 "driver_specific": { 00:22:47.953 "raid": { 00:22:47.953 "uuid": "b99e1246-f39f-44c9-83c7-9ffb10003987", 00:22:47.953 "strip_size_kb": 64, 00:22:47.953 "state": "online", 00:22:47.953 "raid_level": "raid5f", 00:22:47.953 "superblock": true, 00:22:47.953 "num_base_bdevs": 4, 00:22:47.953 "num_base_bdevs_discovered": 4, 00:22:47.953 "num_base_bdevs_operational": 4, 00:22:47.953 "base_bdevs_list": [ 00:22:47.953 { 00:22:47.953 "name": "BaseBdev1", 00:22:47.953 "uuid": "badd9dc6-3e55-400d-a98a-1e5689324421", 00:22:47.953 "is_configured": true, 00:22:47.953 "data_offset": 2048, 00:22:47.953 "data_size": 63488 00:22:47.953 }, 00:22:47.953 { 00:22:47.953 "name": "BaseBdev2", 00:22:47.953 "uuid": "a91ce5c8-8e5a-4ebe-b711-5ade23634655", 00:22:47.953 "is_configured": true, 00:22:47.953 "data_offset": 2048, 00:22:47.953 "data_size": 63488 00:22:47.953 }, 00:22:47.953 { 00:22:47.953 "name": "BaseBdev3", 00:22:47.953 "uuid": "4d3d889a-c58a-4949-9213-3a1e76ce6182", 00:22:47.953 "is_configured": true, 00:22:47.953 "data_offset": 2048, 00:22:47.953 "data_size": 63488 00:22:47.953 }, 00:22:47.953 { 00:22:47.953 "name": "BaseBdev4", 00:22:47.953 "uuid": "d98d8491-735f-4ddb-bfa5-8f8bb94f68a5", 00:22:47.953 "is_configured": true, 00:22:47.953 "data_offset": 2048, 00:22:47.953 "data_size": 63488 00:22:47.953 } 00:22:47.953 ] 00:22:47.953 } 00:22:47.953 } 00:22:47.953 }' 00:22:47.953 09:22:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:47.953 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:47.953 BaseBdev2 00:22:47.953 BaseBdev3 00:22:47.953 BaseBdev4' 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:47.954 09:22:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.954 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.212 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:48.212 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:48.212 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:48.212 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:48.212 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.212 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:22:48.212 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:48.212 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.212 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:48.212 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:48.212 09:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:48.212 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.212 09:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.212 [2024-10-15 09:22:31.959272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.212 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:48.212 "name": "Existed_Raid", 00:22:48.212 "uuid": "b99e1246-f39f-44c9-83c7-9ffb10003987", 00:22:48.212 "strip_size_kb": 64, 00:22:48.212 "state": "online", 00:22:48.212 "raid_level": "raid5f", 00:22:48.212 "superblock": true, 00:22:48.212 "num_base_bdevs": 4, 00:22:48.212 "num_base_bdevs_discovered": 3, 00:22:48.212 "num_base_bdevs_operational": 3, 00:22:48.212 "base_bdevs_list": [ 00:22:48.212 { 00:22:48.212 "name": null, 00:22:48.212 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:48.212 "is_configured": false, 00:22:48.212 "data_offset": 0, 00:22:48.212 "data_size": 63488 00:22:48.212 }, 00:22:48.212 { 00:22:48.212 "name": "BaseBdev2", 00:22:48.212 "uuid": "a91ce5c8-8e5a-4ebe-b711-5ade23634655", 00:22:48.212 "is_configured": true, 00:22:48.212 "data_offset": 2048, 00:22:48.213 "data_size": 63488 00:22:48.213 }, 00:22:48.213 { 00:22:48.213 "name": "BaseBdev3", 00:22:48.213 "uuid": "4d3d889a-c58a-4949-9213-3a1e76ce6182", 00:22:48.213 "is_configured": true, 00:22:48.213 "data_offset": 2048, 00:22:48.213 "data_size": 63488 00:22:48.213 }, 00:22:48.213 { 00:22:48.213 "name": "BaseBdev4", 00:22:48.213 "uuid": "d98d8491-735f-4ddb-bfa5-8f8bb94f68a5", 00:22:48.213 "is_configured": true, 00:22:48.213 "data_offset": 2048, 00:22:48.213 "data_size": 63488 00:22:48.213 } 00:22:48.213 ] 00:22:48.213 }' 00:22:48.213 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:48.213 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.780 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:48.780 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:48.780 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.780 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.780 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.780 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:48.780 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.780 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:22:48.780 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:48.780 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:48.780 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.780 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.780 [2024-10-15 09:22:32.641825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:48.780 [2024-10-15 09:22:32.642291] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:49.039 [2024-10-15 09:22:32.737609] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:49.039 
09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.039 [2024-10-15 09:22:32.797751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.039 09:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.039 [2024-10-15 09:22:32.957796] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:49.039 [2024-10-15 09:22:32.958045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:49.298 BaseBdev2 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.298 [ 00:22:49.298 { 00:22:49.298 "name": "BaseBdev2", 00:22:49.298 "aliases": [ 00:22:49.298 "0c286ecd-774c-4031-b824-32c828b7abae" 00:22:49.298 ], 00:22:49.298 "product_name": "Malloc disk", 00:22:49.298 "block_size": 512, 00:22:49.298 "num_blocks": 65536, 00:22:49.298 "uuid": 
"0c286ecd-774c-4031-b824-32c828b7abae", 00:22:49.298 "assigned_rate_limits": { 00:22:49.298 "rw_ios_per_sec": 0, 00:22:49.298 "rw_mbytes_per_sec": 0, 00:22:49.298 "r_mbytes_per_sec": 0, 00:22:49.298 "w_mbytes_per_sec": 0 00:22:49.298 }, 00:22:49.298 "claimed": false, 00:22:49.298 "zoned": false, 00:22:49.298 "supported_io_types": { 00:22:49.298 "read": true, 00:22:49.298 "write": true, 00:22:49.298 "unmap": true, 00:22:49.298 "flush": true, 00:22:49.298 "reset": true, 00:22:49.298 "nvme_admin": false, 00:22:49.298 "nvme_io": false, 00:22:49.298 "nvme_io_md": false, 00:22:49.298 "write_zeroes": true, 00:22:49.298 "zcopy": true, 00:22:49.298 "get_zone_info": false, 00:22:49.298 "zone_management": false, 00:22:49.298 "zone_append": false, 00:22:49.298 "compare": false, 00:22:49.298 "compare_and_write": false, 00:22:49.298 "abort": true, 00:22:49.298 "seek_hole": false, 00:22:49.298 "seek_data": false, 00:22:49.298 "copy": true, 00:22:49.298 "nvme_iov_md": false 00:22:49.298 }, 00:22:49.298 "memory_domains": [ 00:22:49.298 { 00:22:49.298 "dma_device_id": "system", 00:22:49.298 "dma_device_type": 1 00:22:49.298 }, 00:22:49.298 { 00:22:49.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.298 "dma_device_type": 2 00:22:49.298 } 00:22:49.298 ], 00:22:49.298 "driver_specific": {} 00:22:49.298 } 00:22:49.298 ] 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.298 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.557 BaseBdev3 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.557 [ 00:22:49.557 { 00:22:49.557 "name": "BaseBdev3", 00:22:49.557 "aliases": [ 00:22:49.557 "9d309b19-f968-48bb-8f47-63fbabd964f7" 00:22:49.557 ], 00:22:49.557 
"product_name": "Malloc disk", 00:22:49.557 "block_size": 512, 00:22:49.557 "num_blocks": 65536, 00:22:49.557 "uuid": "9d309b19-f968-48bb-8f47-63fbabd964f7", 00:22:49.557 "assigned_rate_limits": { 00:22:49.557 "rw_ios_per_sec": 0, 00:22:49.557 "rw_mbytes_per_sec": 0, 00:22:49.557 "r_mbytes_per_sec": 0, 00:22:49.557 "w_mbytes_per_sec": 0 00:22:49.557 }, 00:22:49.557 "claimed": false, 00:22:49.557 "zoned": false, 00:22:49.557 "supported_io_types": { 00:22:49.557 "read": true, 00:22:49.557 "write": true, 00:22:49.557 "unmap": true, 00:22:49.557 "flush": true, 00:22:49.557 "reset": true, 00:22:49.557 "nvme_admin": false, 00:22:49.557 "nvme_io": false, 00:22:49.557 "nvme_io_md": false, 00:22:49.557 "write_zeroes": true, 00:22:49.557 "zcopy": true, 00:22:49.557 "get_zone_info": false, 00:22:49.557 "zone_management": false, 00:22:49.557 "zone_append": false, 00:22:49.557 "compare": false, 00:22:49.557 "compare_and_write": false, 00:22:49.557 "abort": true, 00:22:49.557 "seek_hole": false, 00:22:49.557 "seek_data": false, 00:22:49.557 "copy": true, 00:22:49.557 "nvme_iov_md": false 00:22:49.557 }, 00:22:49.557 "memory_domains": [ 00:22:49.557 { 00:22:49.557 "dma_device_id": "system", 00:22:49.557 "dma_device_type": 1 00:22:49.557 }, 00:22:49.557 { 00:22:49.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.557 "dma_device_type": 2 00:22:49.557 } 00:22:49.557 ], 00:22:49.557 "driver_specific": {} 00:22:49.557 } 00:22:49.557 ] 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.557 BaseBdev4 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.557 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.557 [ 00:22:49.557 { 00:22:49.557 "name": "BaseBdev4", 00:22:49.557 
"aliases": [ 00:22:49.557 "04003be5-03e4-49fe-9a01-36f8fc2ce2af" 00:22:49.557 ], 00:22:49.557 "product_name": "Malloc disk", 00:22:49.557 "block_size": 512, 00:22:49.557 "num_blocks": 65536, 00:22:49.558 "uuid": "04003be5-03e4-49fe-9a01-36f8fc2ce2af", 00:22:49.558 "assigned_rate_limits": { 00:22:49.558 "rw_ios_per_sec": 0, 00:22:49.558 "rw_mbytes_per_sec": 0, 00:22:49.558 "r_mbytes_per_sec": 0, 00:22:49.558 "w_mbytes_per_sec": 0 00:22:49.558 }, 00:22:49.558 "claimed": false, 00:22:49.558 "zoned": false, 00:22:49.558 "supported_io_types": { 00:22:49.558 "read": true, 00:22:49.558 "write": true, 00:22:49.558 "unmap": true, 00:22:49.558 "flush": true, 00:22:49.558 "reset": true, 00:22:49.558 "nvme_admin": false, 00:22:49.558 "nvme_io": false, 00:22:49.558 "nvme_io_md": false, 00:22:49.558 "write_zeroes": true, 00:22:49.558 "zcopy": true, 00:22:49.558 "get_zone_info": false, 00:22:49.558 "zone_management": false, 00:22:49.558 "zone_append": false, 00:22:49.558 "compare": false, 00:22:49.558 "compare_and_write": false, 00:22:49.558 "abort": true, 00:22:49.558 "seek_hole": false, 00:22:49.558 "seek_data": false, 00:22:49.558 "copy": true, 00:22:49.558 "nvme_iov_md": false 00:22:49.558 }, 00:22:49.558 "memory_domains": [ 00:22:49.558 { 00:22:49.558 "dma_device_id": "system", 00:22:49.558 "dma_device_type": 1 00:22:49.558 }, 00:22:49.558 { 00:22:49.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.558 "dma_device_type": 2 00:22:49.558 } 00:22:49.558 ], 00:22:49.558 "driver_specific": {} 00:22:49.558 } 00:22:49.558 ] 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:49.558 
09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.558 [2024-10-15 09:22:33.341036] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:49.558 [2024-10-15 09:22:33.341237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:49.558 [2024-10-15 09:22:33.341387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:49.558 [2024-10-15 09:22:33.344155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:49.558 [2024-10-15 09:22:33.344368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:49.558 "name": "Existed_Raid", 00:22:49.558 "uuid": "4c89dfb5-d89d-423b-baf8-3195afdc4614", 00:22:49.558 "strip_size_kb": 64, 00:22:49.558 "state": "configuring", 00:22:49.558 "raid_level": "raid5f", 00:22:49.558 "superblock": true, 00:22:49.558 "num_base_bdevs": 4, 00:22:49.558 "num_base_bdevs_discovered": 3, 00:22:49.558 "num_base_bdevs_operational": 4, 00:22:49.558 "base_bdevs_list": [ 00:22:49.558 { 00:22:49.558 "name": "BaseBdev1", 00:22:49.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.558 "is_configured": false, 00:22:49.558 "data_offset": 0, 00:22:49.558 "data_size": 0 00:22:49.558 }, 00:22:49.558 { 00:22:49.558 "name": "BaseBdev2", 00:22:49.558 "uuid": "0c286ecd-774c-4031-b824-32c828b7abae", 00:22:49.558 "is_configured": true, 00:22:49.558 "data_offset": 2048, 00:22:49.558 "data_size": 63488 00:22:49.558 }, 00:22:49.558 { 00:22:49.558 "name": "BaseBdev3", 
00:22:49.558 "uuid": "9d309b19-f968-48bb-8f47-63fbabd964f7", 00:22:49.558 "is_configured": true, 00:22:49.558 "data_offset": 2048, 00:22:49.558 "data_size": 63488 00:22:49.558 }, 00:22:49.558 { 00:22:49.558 "name": "BaseBdev4", 00:22:49.558 "uuid": "04003be5-03e4-49fe-9a01-36f8fc2ce2af", 00:22:49.558 "is_configured": true, 00:22:49.558 "data_offset": 2048, 00:22:49.558 "data_size": 63488 00:22:49.558 } 00:22:49.558 ] 00:22:49.558 }' 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:49.558 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.125 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:50.125 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.125 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.125 [2024-10-15 09:22:33.877200] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:50.125 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.125 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:50.125 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:50.125 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:50.125 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:50.125 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:50.125 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:50.125 
09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:50.125 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:50.125 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:50.125 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:50.125 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.125 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:50.125 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.125 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.125 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.125 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:50.125 "name": "Existed_Raid", 00:22:50.125 "uuid": "4c89dfb5-d89d-423b-baf8-3195afdc4614", 00:22:50.125 "strip_size_kb": 64, 00:22:50.125 "state": "configuring", 00:22:50.125 "raid_level": "raid5f", 00:22:50.125 "superblock": true, 00:22:50.125 "num_base_bdevs": 4, 00:22:50.125 "num_base_bdevs_discovered": 2, 00:22:50.125 "num_base_bdevs_operational": 4, 00:22:50.125 "base_bdevs_list": [ 00:22:50.125 { 00:22:50.125 "name": "BaseBdev1", 00:22:50.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.125 "is_configured": false, 00:22:50.125 "data_offset": 0, 00:22:50.125 "data_size": 0 00:22:50.125 }, 00:22:50.125 { 00:22:50.125 "name": null, 00:22:50.125 "uuid": "0c286ecd-774c-4031-b824-32c828b7abae", 00:22:50.125 "is_configured": false, 00:22:50.125 "data_offset": 0, 00:22:50.125 "data_size": 63488 00:22:50.125 }, 00:22:50.125 { 
00:22:50.125 "name": "BaseBdev3", 00:22:50.125 "uuid": "9d309b19-f968-48bb-8f47-63fbabd964f7", 00:22:50.125 "is_configured": true, 00:22:50.125 "data_offset": 2048, 00:22:50.125 "data_size": 63488 00:22:50.125 }, 00:22:50.125 { 00:22:50.125 "name": "BaseBdev4", 00:22:50.125 "uuid": "04003be5-03e4-49fe-9a01-36f8fc2ce2af", 00:22:50.125 "is_configured": true, 00:22:50.125 "data_offset": 2048, 00:22:50.125 "data_size": 63488 00:22:50.125 } 00:22:50.125 ] 00:22:50.125 }' 00:22:50.125 09:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:50.125 09:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.693 [2024-10-15 09:22:34.482666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:50.693 BaseBdev1 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.693 [ 00:22:50.693 { 00:22:50.693 "name": "BaseBdev1", 00:22:50.693 "aliases": [ 00:22:50.693 "6e6ca29e-0541-4b72-ac88-20b978768606" 00:22:50.693 ], 00:22:50.693 "product_name": "Malloc disk", 00:22:50.693 "block_size": 512, 00:22:50.693 "num_blocks": 65536, 00:22:50.693 "uuid": "6e6ca29e-0541-4b72-ac88-20b978768606", 00:22:50.693 "assigned_rate_limits": { 00:22:50.693 "rw_ios_per_sec": 0, 00:22:50.693 "rw_mbytes_per_sec": 0, 00:22:50.693 
"r_mbytes_per_sec": 0, 00:22:50.693 "w_mbytes_per_sec": 0 00:22:50.693 }, 00:22:50.693 "claimed": true, 00:22:50.693 "claim_type": "exclusive_write", 00:22:50.693 "zoned": false, 00:22:50.693 "supported_io_types": { 00:22:50.693 "read": true, 00:22:50.693 "write": true, 00:22:50.693 "unmap": true, 00:22:50.693 "flush": true, 00:22:50.693 "reset": true, 00:22:50.693 "nvme_admin": false, 00:22:50.693 "nvme_io": false, 00:22:50.693 "nvme_io_md": false, 00:22:50.693 "write_zeroes": true, 00:22:50.693 "zcopy": true, 00:22:50.693 "get_zone_info": false, 00:22:50.693 "zone_management": false, 00:22:50.693 "zone_append": false, 00:22:50.693 "compare": false, 00:22:50.693 "compare_and_write": false, 00:22:50.693 "abort": true, 00:22:50.693 "seek_hole": false, 00:22:50.693 "seek_data": false, 00:22:50.693 "copy": true, 00:22:50.693 "nvme_iov_md": false 00:22:50.693 }, 00:22:50.693 "memory_domains": [ 00:22:50.693 { 00:22:50.693 "dma_device_id": "system", 00:22:50.693 "dma_device_type": 1 00:22:50.693 }, 00:22:50.693 { 00:22:50.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.693 "dma_device_type": 2 00:22:50.693 } 00:22:50.693 ], 00:22:50.693 "driver_specific": {} 00:22:50.693 } 00:22:50.693 ] 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:50.693 09:22:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.693 09:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:50.693 "name": "Existed_Raid", 00:22:50.693 "uuid": "4c89dfb5-d89d-423b-baf8-3195afdc4614", 00:22:50.693 "strip_size_kb": 64, 00:22:50.693 "state": "configuring", 00:22:50.693 "raid_level": "raid5f", 00:22:50.693 "superblock": true, 00:22:50.693 "num_base_bdevs": 4, 00:22:50.693 "num_base_bdevs_discovered": 3, 00:22:50.693 "num_base_bdevs_operational": 4, 00:22:50.693 "base_bdevs_list": [ 00:22:50.693 { 00:22:50.693 "name": "BaseBdev1", 00:22:50.693 "uuid": "6e6ca29e-0541-4b72-ac88-20b978768606", 00:22:50.693 "is_configured": true, 00:22:50.693 "data_offset": 2048, 00:22:50.693 "data_size": 63488 00:22:50.693 
}, 00:22:50.693 { 00:22:50.693 "name": null, 00:22:50.693 "uuid": "0c286ecd-774c-4031-b824-32c828b7abae", 00:22:50.693 "is_configured": false, 00:22:50.693 "data_offset": 0, 00:22:50.693 "data_size": 63488 00:22:50.693 }, 00:22:50.693 { 00:22:50.693 "name": "BaseBdev3", 00:22:50.694 "uuid": "9d309b19-f968-48bb-8f47-63fbabd964f7", 00:22:50.694 "is_configured": true, 00:22:50.694 "data_offset": 2048, 00:22:50.694 "data_size": 63488 00:22:50.694 }, 00:22:50.694 { 00:22:50.694 "name": "BaseBdev4", 00:22:50.694 "uuid": "04003be5-03e4-49fe-9a01-36f8fc2ce2af", 00:22:50.694 "is_configured": true, 00:22:50.694 "data_offset": 2048, 00:22:50.694 "data_size": 63488 00:22:50.694 } 00:22:50.694 ] 00:22:50.694 }' 00:22:50.694 09:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:50.694 09:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.261 
[2024-10-15 09:22:35.098976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:51.261 "name": "Existed_Raid", 00:22:51.261 "uuid": "4c89dfb5-d89d-423b-baf8-3195afdc4614", 00:22:51.261 "strip_size_kb": 64, 00:22:51.261 "state": "configuring", 00:22:51.261 "raid_level": "raid5f", 00:22:51.261 "superblock": true, 00:22:51.261 "num_base_bdevs": 4, 00:22:51.261 "num_base_bdevs_discovered": 2, 00:22:51.261 "num_base_bdevs_operational": 4, 00:22:51.261 "base_bdevs_list": [ 00:22:51.261 { 00:22:51.261 "name": "BaseBdev1", 00:22:51.261 "uuid": "6e6ca29e-0541-4b72-ac88-20b978768606", 00:22:51.261 "is_configured": true, 00:22:51.261 "data_offset": 2048, 00:22:51.261 "data_size": 63488 00:22:51.261 }, 00:22:51.261 { 00:22:51.261 "name": null, 00:22:51.261 "uuid": "0c286ecd-774c-4031-b824-32c828b7abae", 00:22:51.261 "is_configured": false, 00:22:51.261 "data_offset": 0, 00:22:51.261 "data_size": 63488 00:22:51.261 }, 00:22:51.261 { 00:22:51.261 "name": null, 00:22:51.261 "uuid": "9d309b19-f968-48bb-8f47-63fbabd964f7", 00:22:51.261 "is_configured": false, 00:22:51.261 "data_offset": 0, 00:22:51.261 "data_size": 63488 00:22:51.261 }, 00:22:51.261 { 00:22:51.261 "name": "BaseBdev4", 00:22:51.261 "uuid": "04003be5-03e4-49fe-9a01-36f8fc2ce2af", 00:22:51.261 "is_configured": true, 00:22:51.261 "data_offset": 2048, 00:22:51.261 "data_size": 63488 00:22:51.261 } 00:22:51.261 ] 00:22:51.261 }' 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:51.261 09:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.828 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.828 09:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.828 09:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:22:51.828 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:51.828 09:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.828 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:51.828 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:51.828 09:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.828 09:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.828 [2024-10-15 09:22:35.671172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:51.828 09:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.828 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:51.828 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:51.828 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:51.828 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:51.828 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:51.828 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:51.828 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:51.828 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:51.829 09:22:35 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:51.829 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:51.829 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:51.829 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.829 09:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.829 09:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.829 09:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.829 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:51.829 "name": "Existed_Raid", 00:22:51.829 "uuid": "4c89dfb5-d89d-423b-baf8-3195afdc4614", 00:22:51.829 "strip_size_kb": 64, 00:22:51.829 "state": "configuring", 00:22:51.829 "raid_level": "raid5f", 00:22:51.829 "superblock": true, 00:22:51.829 "num_base_bdevs": 4, 00:22:51.829 "num_base_bdevs_discovered": 3, 00:22:51.829 "num_base_bdevs_operational": 4, 00:22:51.829 "base_bdevs_list": [ 00:22:51.829 { 00:22:51.829 "name": "BaseBdev1", 00:22:51.829 "uuid": "6e6ca29e-0541-4b72-ac88-20b978768606", 00:22:51.829 "is_configured": true, 00:22:51.829 "data_offset": 2048, 00:22:51.829 "data_size": 63488 00:22:51.829 }, 00:22:51.829 { 00:22:51.829 "name": null, 00:22:51.829 "uuid": "0c286ecd-774c-4031-b824-32c828b7abae", 00:22:51.829 "is_configured": false, 00:22:51.829 "data_offset": 0, 00:22:51.829 "data_size": 63488 00:22:51.829 }, 00:22:51.829 { 00:22:51.829 "name": "BaseBdev3", 00:22:51.829 "uuid": "9d309b19-f968-48bb-8f47-63fbabd964f7", 00:22:51.829 "is_configured": true, 00:22:51.829 "data_offset": 2048, 00:22:51.829 "data_size": 63488 00:22:51.829 }, 00:22:51.829 { 
00:22:51.829 "name": "BaseBdev4", 00:22:51.829 "uuid": "04003be5-03e4-49fe-9a01-36f8fc2ce2af", 00:22:51.829 "is_configured": true, 00:22:51.829 "data_offset": 2048, 00:22:51.829 "data_size": 63488 00:22:51.829 } 00:22:51.829 ] 00:22:51.829 }' 00:22:51.829 09:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:51.829 09:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.395 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.395 09:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.395 09:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.395 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:52.395 09:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.395 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:52.395 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:52.395 09:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.395 09:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.395 [2024-10-15 09:22:36.227354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:52.395 09:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.654 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:52.654 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:22:52.654 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:52.654 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:52.654 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:52.654 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:52.654 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:52.654 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:52.654 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:52.654 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:52.654 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.654 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:52.654 09:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.654 09:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.654 09:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.654 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:52.654 "name": "Existed_Raid", 00:22:52.654 "uuid": "4c89dfb5-d89d-423b-baf8-3195afdc4614", 00:22:52.654 "strip_size_kb": 64, 00:22:52.654 "state": "configuring", 00:22:52.654 "raid_level": "raid5f", 00:22:52.654 "superblock": true, 00:22:52.654 "num_base_bdevs": 4, 00:22:52.654 "num_base_bdevs_discovered": 2, 00:22:52.654 
"num_base_bdevs_operational": 4, 00:22:52.654 "base_bdevs_list": [ 00:22:52.654 { 00:22:52.654 "name": null, 00:22:52.654 "uuid": "6e6ca29e-0541-4b72-ac88-20b978768606", 00:22:52.654 "is_configured": false, 00:22:52.654 "data_offset": 0, 00:22:52.654 "data_size": 63488 00:22:52.654 }, 00:22:52.654 { 00:22:52.654 "name": null, 00:22:52.654 "uuid": "0c286ecd-774c-4031-b824-32c828b7abae", 00:22:52.654 "is_configured": false, 00:22:52.654 "data_offset": 0, 00:22:52.654 "data_size": 63488 00:22:52.654 }, 00:22:52.654 { 00:22:52.654 "name": "BaseBdev3", 00:22:52.654 "uuid": "9d309b19-f968-48bb-8f47-63fbabd964f7", 00:22:52.654 "is_configured": true, 00:22:52.654 "data_offset": 2048, 00:22:52.654 "data_size": 63488 00:22:52.654 }, 00:22:52.654 { 00:22:52.654 "name": "BaseBdev4", 00:22:52.654 "uuid": "04003be5-03e4-49fe-9a01-36f8fc2ce2af", 00:22:52.654 "is_configured": true, 00:22:52.654 "data_offset": 2048, 00:22:52.654 "data_size": 63488 00:22:52.654 } 00:22:52.654 ] 00:22:52.654 }' 00:22:52.654 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:52.654 09:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.913 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.913 09:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.913 09:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.913 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:52.913 09:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.171 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:53.171 09:22:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:53.171 09:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.171 09:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.171 [2024-10-15 09:22:36.881207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:53.171 09:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.171 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:53.171 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:53.171 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:53.171 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:53.171 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:53.171 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:53.171 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:53.171 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:53.171 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:53.171 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:53.171 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.171 09:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:22:53.171 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:53.171 09:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.171 09:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.171 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:53.171 "name": "Existed_Raid", 00:22:53.171 "uuid": "4c89dfb5-d89d-423b-baf8-3195afdc4614", 00:22:53.171 "strip_size_kb": 64, 00:22:53.171 "state": "configuring", 00:22:53.171 "raid_level": "raid5f", 00:22:53.171 "superblock": true, 00:22:53.171 "num_base_bdevs": 4, 00:22:53.171 "num_base_bdevs_discovered": 3, 00:22:53.171 "num_base_bdevs_operational": 4, 00:22:53.171 "base_bdevs_list": [ 00:22:53.171 { 00:22:53.171 "name": null, 00:22:53.171 "uuid": "6e6ca29e-0541-4b72-ac88-20b978768606", 00:22:53.171 "is_configured": false, 00:22:53.171 "data_offset": 0, 00:22:53.171 "data_size": 63488 00:22:53.171 }, 00:22:53.171 { 00:22:53.171 "name": "BaseBdev2", 00:22:53.171 "uuid": "0c286ecd-774c-4031-b824-32c828b7abae", 00:22:53.171 "is_configured": true, 00:22:53.171 "data_offset": 2048, 00:22:53.171 "data_size": 63488 00:22:53.171 }, 00:22:53.171 { 00:22:53.171 "name": "BaseBdev3", 00:22:53.171 "uuid": "9d309b19-f968-48bb-8f47-63fbabd964f7", 00:22:53.171 "is_configured": true, 00:22:53.172 "data_offset": 2048, 00:22:53.172 "data_size": 63488 00:22:53.172 }, 00:22:53.172 { 00:22:53.172 "name": "BaseBdev4", 00:22:53.172 "uuid": "04003be5-03e4-49fe-9a01-36f8fc2ce2af", 00:22:53.172 "is_configured": true, 00:22:53.172 "data_offset": 2048, 00:22:53.172 "data_size": 63488 00:22:53.172 } 00:22:53.172 ] 00:22:53.172 }' 00:22:53.172 09:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:53.172 09:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6e6ca29e-0541-4b72-ac88-20b978768606 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.738 [2024-10-15 09:22:37.531414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:53.738 [2024-10-15 09:22:37.531971] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:53.738 [2024-10-15 
09:22:37.531998] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:53.738 NewBaseBdev 00:22:53.738 [2024-10-15 09:22:37.532365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.738 [2024-10-15 09:22:37.538957] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:53.738 [2024-10-15 09:22:37.538990] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:53.738 [2024-10-15 09:22:37.539365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.738 [ 00:22:53.738 { 00:22:53.738 "name": "NewBaseBdev", 00:22:53.738 "aliases": [ 00:22:53.738 "6e6ca29e-0541-4b72-ac88-20b978768606" 00:22:53.738 ], 00:22:53.738 "product_name": "Malloc disk", 00:22:53.738 "block_size": 512, 00:22:53.738 "num_blocks": 65536, 00:22:53.738 "uuid": "6e6ca29e-0541-4b72-ac88-20b978768606", 00:22:53.738 "assigned_rate_limits": { 00:22:53.738 "rw_ios_per_sec": 0, 00:22:53.738 "rw_mbytes_per_sec": 0, 00:22:53.738 "r_mbytes_per_sec": 0, 00:22:53.738 "w_mbytes_per_sec": 0 00:22:53.738 }, 00:22:53.738 "claimed": true, 00:22:53.738 "claim_type": "exclusive_write", 00:22:53.738 "zoned": false, 00:22:53.738 "supported_io_types": { 00:22:53.738 "read": true, 00:22:53.738 "write": true, 00:22:53.738 "unmap": true, 00:22:53.738 "flush": true, 00:22:53.738 "reset": true, 00:22:53.738 "nvme_admin": false, 00:22:53.738 "nvme_io": false, 00:22:53.738 "nvme_io_md": false, 00:22:53.738 "write_zeroes": true, 00:22:53.738 "zcopy": true, 00:22:53.738 "get_zone_info": false, 00:22:53.738 "zone_management": false, 00:22:53.738 "zone_append": false, 00:22:53.738 "compare": false, 00:22:53.738 "compare_and_write": false, 00:22:53.738 "abort": true, 00:22:53.738 "seek_hole": false, 00:22:53.738 "seek_data": false, 00:22:53.738 "copy": true, 00:22:53.738 "nvme_iov_md": false 00:22:53.738 }, 00:22:53.738 "memory_domains": [ 00:22:53.738 { 00:22:53.738 "dma_device_id": "system", 00:22:53.738 "dma_device_type": 1 00:22:53.738 }, 00:22:53.738 { 00:22:53.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.738 "dma_device_type": 2 00:22:53.738 } 00:22:53.738 ], 00:22:53.738 "driver_specific": {} 00:22:53.738 } 00:22:53.738 ] 00:22:53.738 09:22:37 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:53.738 "name": "Existed_Raid", 00:22:53.738 "uuid": "4c89dfb5-d89d-423b-baf8-3195afdc4614", 00:22:53.738 "strip_size_kb": 64, 00:22:53.738 "state": "online", 00:22:53.738 "raid_level": "raid5f", 00:22:53.738 "superblock": true, 00:22:53.738 "num_base_bdevs": 4, 00:22:53.738 "num_base_bdevs_discovered": 4, 00:22:53.738 "num_base_bdevs_operational": 4, 00:22:53.738 "base_bdevs_list": [ 00:22:53.738 { 00:22:53.738 "name": "NewBaseBdev", 00:22:53.738 "uuid": "6e6ca29e-0541-4b72-ac88-20b978768606", 00:22:53.738 "is_configured": true, 00:22:53.738 "data_offset": 2048, 00:22:53.738 "data_size": 63488 00:22:53.738 }, 00:22:53.738 { 00:22:53.738 "name": "BaseBdev2", 00:22:53.738 "uuid": "0c286ecd-774c-4031-b824-32c828b7abae", 00:22:53.738 "is_configured": true, 00:22:53.738 "data_offset": 2048, 00:22:53.738 "data_size": 63488 00:22:53.738 }, 00:22:53.738 { 00:22:53.738 "name": "BaseBdev3", 00:22:53.738 "uuid": "9d309b19-f968-48bb-8f47-63fbabd964f7", 00:22:53.738 "is_configured": true, 00:22:53.738 "data_offset": 2048, 00:22:53.738 "data_size": 63488 00:22:53.738 }, 00:22:53.738 { 00:22:53.738 "name": "BaseBdev4", 00:22:53.738 "uuid": "04003be5-03e4-49fe-9a01-36f8fc2ce2af", 00:22:53.738 "is_configured": true, 00:22:53.738 "data_offset": 2048, 00:22:53.738 "data_size": 63488 00:22:53.738 } 00:22:53.738 ] 00:22:53.738 }' 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:53.738 09:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.315 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:54.315 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:54.315 09:22:38 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:54.315 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:54.316 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:54.316 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:54.316 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:54.316 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:54.316 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.316 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.316 [2024-10-15 09:22:38.096434] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:54.316 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.316 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:54.316 "name": "Existed_Raid", 00:22:54.316 "aliases": [ 00:22:54.316 "4c89dfb5-d89d-423b-baf8-3195afdc4614" 00:22:54.316 ], 00:22:54.316 "product_name": "Raid Volume", 00:22:54.316 "block_size": 512, 00:22:54.316 "num_blocks": 190464, 00:22:54.316 "uuid": "4c89dfb5-d89d-423b-baf8-3195afdc4614", 00:22:54.316 "assigned_rate_limits": { 00:22:54.316 "rw_ios_per_sec": 0, 00:22:54.316 "rw_mbytes_per_sec": 0, 00:22:54.316 "r_mbytes_per_sec": 0, 00:22:54.316 "w_mbytes_per_sec": 0 00:22:54.316 }, 00:22:54.316 "claimed": false, 00:22:54.316 "zoned": false, 00:22:54.316 "supported_io_types": { 00:22:54.316 "read": true, 00:22:54.316 "write": true, 00:22:54.316 "unmap": false, 00:22:54.316 "flush": false, 00:22:54.316 "reset": true, 00:22:54.316 "nvme_admin": false, 00:22:54.316 "nvme_io": false, 
00:22:54.316 "nvme_io_md": false, 00:22:54.316 "write_zeroes": true, 00:22:54.316 "zcopy": false, 00:22:54.316 "get_zone_info": false, 00:22:54.316 "zone_management": false, 00:22:54.316 "zone_append": false, 00:22:54.316 "compare": false, 00:22:54.316 "compare_and_write": false, 00:22:54.316 "abort": false, 00:22:54.316 "seek_hole": false, 00:22:54.316 "seek_data": false, 00:22:54.316 "copy": false, 00:22:54.316 "nvme_iov_md": false 00:22:54.316 }, 00:22:54.316 "driver_specific": { 00:22:54.316 "raid": { 00:22:54.316 "uuid": "4c89dfb5-d89d-423b-baf8-3195afdc4614", 00:22:54.316 "strip_size_kb": 64, 00:22:54.316 "state": "online", 00:22:54.316 "raid_level": "raid5f", 00:22:54.316 "superblock": true, 00:22:54.316 "num_base_bdevs": 4, 00:22:54.316 "num_base_bdevs_discovered": 4, 00:22:54.316 "num_base_bdevs_operational": 4, 00:22:54.316 "base_bdevs_list": [ 00:22:54.316 { 00:22:54.316 "name": "NewBaseBdev", 00:22:54.316 "uuid": "6e6ca29e-0541-4b72-ac88-20b978768606", 00:22:54.316 "is_configured": true, 00:22:54.316 "data_offset": 2048, 00:22:54.316 "data_size": 63488 00:22:54.316 }, 00:22:54.316 { 00:22:54.316 "name": "BaseBdev2", 00:22:54.316 "uuid": "0c286ecd-774c-4031-b824-32c828b7abae", 00:22:54.316 "is_configured": true, 00:22:54.316 "data_offset": 2048, 00:22:54.316 "data_size": 63488 00:22:54.316 }, 00:22:54.316 { 00:22:54.316 "name": "BaseBdev3", 00:22:54.316 "uuid": "9d309b19-f968-48bb-8f47-63fbabd964f7", 00:22:54.316 "is_configured": true, 00:22:54.316 "data_offset": 2048, 00:22:54.316 "data_size": 63488 00:22:54.316 }, 00:22:54.316 { 00:22:54.316 "name": "BaseBdev4", 00:22:54.316 "uuid": "04003be5-03e4-49fe-9a01-36f8fc2ce2af", 00:22:54.316 "is_configured": true, 00:22:54.316 "data_offset": 2048, 00:22:54.316 "data_size": 63488 00:22:54.316 } 00:22:54.316 ] 00:22:54.316 } 00:22:54.316 } 00:22:54.316 }' 00:22:54.316 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:22:54.316 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:54.316 BaseBdev2 00:22:54.316 BaseBdev3 00:22:54.316 BaseBdev4' 00:22:54.316 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:54.573 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.574 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:54.574 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.574 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:22:54.574 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.574 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.574 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:54.574 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:54.574 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:54.574 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.574 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.574 [2024-10-15 09:22:38.480210] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:54.574 [2024-10-15 09:22:38.480400] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:54.574 [2024-10-15 09:22:38.480650] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:54.574 [2024-10-15 09:22:38.481226] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:54.574 [2024-10-15 09:22:38.481255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:54.574 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.574 09:22:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84148 00:22:54.574 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84148 ']' 00:22:54.574 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 84148 00:22:54.574 09:22:38 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:22:54.574 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:54.574 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84148 00:22:54.832 killing process with pid 84148 00:22:54.832 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:54.832 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:54.832 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84148' 00:22:54.832 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84148 00:22:54.832 [2024-10-15 09:22:38.521949] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:54.832 09:22:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84148 00:22:55.091 [2024-10-15 09:22:38.914838] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:56.467 09:22:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:22:56.467 00:22:56.467 real 0m13.060s 00:22:56.467 user 0m21.330s 00:22:56.467 sys 0m1.966s 00:22:56.467 09:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:56.467 09:22:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.468 ************************************ 00:22:56.468 END TEST raid5f_state_function_test_sb 00:22:56.468 ************************************ 00:22:56.468 09:22:40 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:22:56.468 09:22:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:56.468 
09:22:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:56.468 09:22:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:56.468 ************************************ 00:22:56.468 START TEST raid5f_superblock_test 00:22:56.468 ************************************ 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84830 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84830 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 84830 ']' 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:56.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:56.468 09:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.468 [2024-10-15 09:22:40.260061] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:22:56.468 [2024-10-15 09:22:40.260308] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84830 ] 00:22:56.730 [2024-10-15 09:22:40.439624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.730 [2024-10-15 09:22:40.598235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.990 [2024-10-15 09:22:40.834491] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:56.990 [2024-10-15 09:22:40.834580] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.560 malloc1 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.560 [2024-10-15 09:22:41.377937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:57.560 [2024-10-15 09:22:41.378220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.560 [2024-10-15 09:22:41.378280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:57.560 [2024-10-15 09:22:41.378297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.560 [2024-10-15 09:22:41.381419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.560 pt1 00:22:57.560 [2024-10-15 09:22:41.381623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.560 malloc2 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.560 [2024-10-15 09:22:41.433223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:57.560 [2024-10-15 09:22:41.433462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.560 [2024-10-15 09:22:41.433549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:57.560 [2024-10-15 09:22:41.433770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.560 [2024-10-15 09:22:41.436800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.560 [2024-10-15 09:22:41.436840] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:57.560 pt2 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.560 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.821 malloc3 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.821 [2024-10-15 09:22:41.505470] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:57.821 [2024-10-15 09:22:41.505731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.821 [2024-10-15 09:22:41.505783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:57.821 [2024-10-15 09:22:41.505801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.821 [2024-10-15 09:22:41.508832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.821 [2024-10-15 09:22:41.508875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:57.821 pt3 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.821 09:22:41 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.821 malloc4 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.821 [2024-10-15 09:22:41.565393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:57.821 [2024-10-15 09:22:41.565602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.821 [2024-10-15 09:22:41.565794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:57.821 [2024-10-15 09:22:41.565823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.821 [2024-10-15 09:22:41.569009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.821 [2024-10-15 09:22:41.569187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:57.821 pt4 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.821 09:22:41 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:57.821 [2024-10-15 09:22:41.577640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:57.821 [2024-10-15 09:22:41.580474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:57.821 [2024-10-15 09:22:41.580703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:57.822 [2024-10-15 09:22:41.580819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:57.822 [2024-10-15 09:22:41.581145] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:57.822 [2024-10-15 09:22:41.581166] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:57.822 [2024-10-15 09:22:41.581568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:57.822 [2024-10-15 09:22:41.588610] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:57.822 [2024-10-15 09:22:41.588640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:57.822 [2024-10-15 09:22:41.588896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:57.822 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.822 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:57.822 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:57.822 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:57.822 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:57.822 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:57.822 
09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:57.822 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:57.822 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:57.822 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:57.822 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:57.822 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.822 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.822 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.822 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.822 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.822 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:57.822 "name": "raid_bdev1", 00:22:57.822 "uuid": "6cde0e7a-8c36-42e7-967d-06ddaae028eb", 00:22:57.822 "strip_size_kb": 64, 00:22:57.822 "state": "online", 00:22:57.822 "raid_level": "raid5f", 00:22:57.822 "superblock": true, 00:22:57.822 "num_base_bdevs": 4, 00:22:57.822 "num_base_bdevs_discovered": 4, 00:22:57.822 "num_base_bdevs_operational": 4, 00:22:57.822 "base_bdevs_list": [ 00:22:57.822 { 00:22:57.822 "name": "pt1", 00:22:57.822 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:57.822 "is_configured": true, 00:22:57.822 "data_offset": 2048, 00:22:57.822 "data_size": 63488 00:22:57.822 }, 00:22:57.822 { 00:22:57.822 "name": "pt2", 00:22:57.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:57.822 "is_configured": true, 00:22:57.822 "data_offset": 2048, 00:22:57.822 
"data_size": 63488 00:22:57.822 }, 00:22:57.822 { 00:22:57.822 "name": "pt3", 00:22:57.822 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:57.822 "is_configured": true, 00:22:57.822 "data_offset": 2048, 00:22:57.822 "data_size": 63488 00:22:57.822 }, 00:22:57.822 { 00:22:57.822 "name": "pt4", 00:22:57.822 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:57.822 "is_configured": true, 00:22:57.822 "data_offset": 2048, 00:22:57.822 "data_size": 63488 00:22:57.822 } 00:22:57.822 ] 00:22:57.822 }' 00:22:57.822 09:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:57.822 09:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.390 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:58.390 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:58.390 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:58.390 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:58.390 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:58.390 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:58.390 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:58.390 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.390 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.390 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:58.390 [2024-10-15 09:22:42.133560] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:58.390 09:22:42 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.390 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:58.390 "name": "raid_bdev1", 00:22:58.390 "aliases": [ 00:22:58.390 "6cde0e7a-8c36-42e7-967d-06ddaae028eb" 00:22:58.390 ], 00:22:58.390 "product_name": "Raid Volume", 00:22:58.390 "block_size": 512, 00:22:58.390 "num_blocks": 190464, 00:22:58.390 "uuid": "6cde0e7a-8c36-42e7-967d-06ddaae028eb", 00:22:58.390 "assigned_rate_limits": { 00:22:58.390 "rw_ios_per_sec": 0, 00:22:58.390 "rw_mbytes_per_sec": 0, 00:22:58.390 "r_mbytes_per_sec": 0, 00:22:58.390 "w_mbytes_per_sec": 0 00:22:58.390 }, 00:22:58.390 "claimed": false, 00:22:58.390 "zoned": false, 00:22:58.390 "supported_io_types": { 00:22:58.390 "read": true, 00:22:58.390 "write": true, 00:22:58.390 "unmap": false, 00:22:58.390 "flush": false, 00:22:58.390 "reset": true, 00:22:58.390 "nvme_admin": false, 00:22:58.390 "nvme_io": false, 00:22:58.390 "nvme_io_md": false, 00:22:58.390 "write_zeroes": true, 00:22:58.390 "zcopy": false, 00:22:58.390 "get_zone_info": false, 00:22:58.390 "zone_management": false, 00:22:58.390 "zone_append": false, 00:22:58.390 "compare": false, 00:22:58.390 "compare_and_write": false, 00:22:58.390 "abort": false, 00:22:58.390 "seek_hole": false, 00:22:58.390 "seek_data": false, 00:22:58.390 "copy": false, 00:22:58.390 "nvme_iov_md": false 00:22:58.390 }, 00:22:58.390 "driver_specific": { 00:22:58.390 "raid": { 00:22:58.390 "uuid": "6cde0e7a-8c36-42e7-967d-06ddaae028eb", 00:22:58.390 "strip_size_kb": 64, 00:22:58.390 "state": "online", 00:22:58.390 "raid_level": "raid5f", 00:22:58.390 "superblock": true, 00:22:58.390 "num_base_bdevs": 4, 00:22:58.390 "num_base_bdevs_discovered": 4, 00:22:58.390 "num_base_bdevs_operational": 4, 00:22:58.390 "base_bdevs_list": [ 00:22:58.391 { 00:22:58.391 "name": "pt1", 00:22:58.391 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:58.391 "is_configured": true, 00:22:58.391 "data_offset": 2048, 
00:22:58.391 "data_size": 63488 00:22:58.391 }, 00:22:58.391 { 00:22:58.391 "name": "pt2", 00:22:58.391 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:58.391 "is_configured": true, 00:22:58.391 "data_offset": 2048, 00:22:58.391 "data_size": 63488 00:22:58.391 }, 00:22:58.391 { 00:22:58.391 "name": "pt3", 00:22:58.391 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:58.391 "is_configured": true, 00:22:58.391 "data_offset": 2048, 00:22:58.391 "data_size": 63488 00:22:58.391 }, 00:22:58.391 { 00:22:58.391 "name": "pt4", 00:22:58.391 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:58.391 "is_configured": true, 00:22:58.391 "data_offset": 2048, 00:22:58.391 "data_size": 63488 00:22:58.391 } 00:22:58.391 ] 00:22:58.391 } 00:22:58.391 } 00:22:58.391 }' 00:22:58.391 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:58.391 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:58.391 pt2 00:22:58.391 pt3 00:22:58.391 pt4' 00:22:58.391 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:58.391 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:58.391 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:58.391 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:58.391 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:58.391 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.391 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.391 09:22:42 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.650 [2024-10-15 09:22:42.517539] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6cde0e7a-8c36-42e7-967d-06ddaae028eb 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
6cde0e7a-8c36-42e7-967d-06ddaae028eb ']' 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.650 [2024-10-15 09:22:42.569357] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:58.650 [2024-10-15 09:22:42.569590] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:58.650 [2024-10-15 09:22:42.569743] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:58.650 [2024-10-15 09:22:42.569884] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:58.650 [2024-10-15 09:22:42.569912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:58.650 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:58.910 
09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.910 09:22:42 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable
00:22:58.910 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:58.910 [2024-10-15 09:22:42.729459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:22:58.910 request:
00:22:58.910 {
00:22:58.910 "name": "raid_bdev1",
00:22:58.910 "raid_level": "raid5f",
00:22:58.910 "base_bdevs": [
00:22:58.910 "malloc1",
00:22:58.910 "malloc2",
00:22:58.910 "malloc3",
00:22:58.910 "malloc4"
00:22:58.910 ],
00:22:58.910 "strip_size_kb": 64,
00:22:58.910 "superblock": false,
00:22:58.910 "method": "bdev_raid_create",
00:22:58.910 "req_id": 1
00:22:58.910 }
00:22:58.910 Got JSON-RPC error response
00:22:58.910 response:
00:22:58.910 {
00:22:58.910 "code": -17,
00:22:58.910 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:22:58.910 }
00:22:58.910 [2024-10-15 09:22:42.732589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:22:58.910 [2024-10-15 09:22:42.732669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:22:58.910 [2024-10-15 09:22:42.732726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:22:58.910 [2024-10-15 09:22:42.732807] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:22:58.910 [2024-10-15 09:22:42.732891] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:22:58.910 [2024-10-15 09:22:42.732927] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:22:58.910 [2024-10-15 09:22:42.732961] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:22:58.910 [2024-10-15 09:22:42.732985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:58.911 [2024-10-15 09:22:42.733002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:58.911 [2024-10-15 09:22:42.797640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
[2024-10-15 09:22:42.797752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:58.911 [2024-10-15 09:22:42.797782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
[2024-10-15 09:22:42.797800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-10-15 09:22:42.801123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-15 09:22:42.801339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
[2024-10-15 09:22:42.801574] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
[2024-10-15 09:22:42.801772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
pt1
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:58.911 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:59.170 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:59.170 "name": "raid_bdev1",
00:22:59.170 "uuid": "6cde0e7a-8c36-42e7-967d-06ddaae028eb",
00:22:59.170 "strip_size_kb": 64,
00:22:59.170 "state": "configuring",
00:22:59.170 "raid_level": "raid5f",
00:22:59.170 "superblock": true,
00:22:59.170 "num_base_bdevs": 4,
00:22:59.170 "num_base_bdevs_discovered": 1,
00:22:59.170 "num_base_bdevs_operational": 4,
00:22:59.170 "base_bdevs_list": [
00:22:59.170 {
00:22:59.170 "name": "pt1",
00:22:59.170 "uuid": "00000000-0000-0000-0000-000000000001",
00:22:59.170 "is_configured": true,
00:22:59.170 "data_offset": 2048,
00:22:59.170 "data_size": 63488
00:22:59.170 },
00:22:59.170 {
00:22:59.170 "name": null,
00:22:59.170 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:59.170 "is_configured": false,
00:22:59.170 "data_offset": 2048,
00:22:59.170 "data_size": 63488
00:22:59.170 },
00:22:59.170 {
00:22:59.170 "name": null,
00:22:59.170 "uuid": "00000000-0000-0000-0000-000000000003",
00:22:59.170 "is_configured": false,
00:22:59.170 "data_offset": 2048,
00:22:59.170 "data_size": 63488
00:22:59.170 },
00:22:59.170 {
00:22:59.170 "name": null,
00:22:59.170 "uuid": "00000000-0000-0000-0000-000000000004",
00:22:59.170 "is_configured": false,
00:22:59.170 "data_offset": 2048,
00:22:59.170 "data_size": 63488
00:22:59.170 }
00:22:59.170 ]
00:22:59.170 }'
00:22:59.170 09:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:59.170 09:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.428 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:22:59.428 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:22:59.428 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:59.428 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.428 [2024-10-15 09:22:43.313852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
[2024-10-15 09:22:43.314106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-10-15 09:22:43.314182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
[2024-10-15 09:22:43.314216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-10-15 09:22:43.314940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-15 09:22:43.314982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
[2024-10-15 09:22:43.315099] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
[2024-10-15 09:22:43.315169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
pt2
00:22:59.429 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:59.429 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:22:59.429 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:59.429 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.429 [2024-10-15 09:22:43.321835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:22:59.429 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:59.429 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:22:59.429 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:59.429 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:59.429 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:22:59.429 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:59.429 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:22:59.429 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:59.429 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:59.429 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:59.429 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:59.429 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:59.429 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:59.429 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.429 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:59.429 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:59.687 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:59.687 "name": "raid_bdev1",
00:22:59.687 "uuid": "6cde0e7a-8c36-42e7-967d-06ddaae028eb",
00:22:59.687 "strip_size_kb": 64,
00:22:59.687 "state": "configuring",
00:22:59.687 "raid_level": "raid5f",
00:22:59.687 "superblock": true,
00:22:59.687 "num_base_bdevs": 4,
00:22:59.687 "num_base_bdevs_discovered": 1,
00:22:59.687 "num_base_bdevs_operational": 4,
00:22:59.687 "base_bdevs_list": [
00:22:59.687 {
00:22:59.687 "name": "pt1",
00:22:59.687 "uuid": "00000000-0000-0000-0000-000000000001",
00:22:59.687 "is_configured": true,
00:22:59.687 "data_offset": 2048,
00:22:59.687 "data_size": 63488
00:22:59.687 },
00:22:59.687 {
00:22:59.687 "name": null,
00:22:59.687 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:59.687 "is_configured": false,
00:22:59.687 "data_offset": 0,
00:22:59.687 "data_size": 63488
00:22:59.687 },
00:22:59.687 {
00:22:59.687 "name": null,
00:22:59.687 "uuid": "00000000-0000-0000-0000-000000000003",
00:22:59.687 "is_configured": false,
00:22:59.687 "data_offset": 2048,
00:22:59.687 "data_size": 63488
00:22:59.687 },
00:22:59.687 {
00:22:59.687 "name": null,
00:22:59.687 "uuid": "00000000-0000-0000-0000-000000000004",
00:22:59.687 "is_configured": false,
00:22:59.687 "data_offset": 2048,
00:22:59.687 "data_size": 63488
00:22:59.687 }
00:22:59.687 ]
00:22:59.687 }'
00:22:59.687 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:59.687 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.946 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:22:59.946 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:22:59.946 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:22:59.946 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:59.946 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.946 [2024-10-15 09:22:43.802072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
[2024-10-15 09:22:43.802346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-10-15 09:22:43.802484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
[2024-10-15 09:22:43.802510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-10-15 09:22:43.803212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-15 09:22:43.803240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
[2024-10-15 09:22:43.803366] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
[2024-10-15 09:22:43.803401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
pt2
00:22:59.946 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:59.946 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:22:59.946 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:22:59.946 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:22:59.946 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:59.946 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.946 [2024-10-15 09:22:43.813961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:22:59.946 [2024-10-15 09:22:43.814037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-10-15 09:22:43.814070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
[2024-10-15 09:22:43.814084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-10-15 09:22:43.814624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-15 09:22:43.814667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
[2024-10-15 09:22:43.814763] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
[2024-10-15 09:22:43.814794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
pt3
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.947 [2024-10-15 09:22:43.821932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
[2024-10-15 09:22:43.822156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-10-15 09:22:43.822235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
[2024-10-15 09:22:43.822394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-10-15 09:22:43.822957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-15 09:22:43.823128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
[2024-10-15 09:22:43.823353] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
[2024-10-15 09:22:43.823427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
[2024-10-15 09:22:43.823777] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
[2024-10-15 09:22:43.823982] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
[2024-10-15 09:22:43.824390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
pt4
[2024-10-15 09:22:43.830987] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
[2024-10-15 09:22:43.831020] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
[2024-10-15 09:22:43.831279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:59.947 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:00.207 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:00.207 "name": "raid_bdev1",
00:23:00.207 "uuid": "6cde0e7a-8c36-42e7-967d-06ddaae028eb",
00:23:00.207 "strip_size_kb": 64,
00:23:00.207 "state": "online",
00:23:00.207 "raid_level": "raid5f",
00:23:00.207 "superblock": true,
00:23:00.207 "num_base_bdevs": 4,
00:23:00.207 "num_base_bdevs_discovered": 4,
00:23:00.207 "num_base_bdevs_operational": 4,
00:23:00.207 "base_bdevs_list": [
00:23:00.207 {
00:23:00.207 "name": "pt1",
00:23:00.207 "uuid": "00000000-0000-0000-0000-000000000001",
00:23:00.207 "is_configured": true,
00:23:00.207 "data_offset": 2048,
00:23:00.207 "data_size": 63488
00:23:00.207 },
00:23:00.207 {
00:23:00.207 "name": "pt2",
00:23:00.207 "uuid": "00000000-0000-0000-0000-000000000002",
00:23:00.207 "is_configured": true,
00:23:00.207 "data_offset": 2048,
00:23:00.207 "data_size": 63488
00:23:00.207 },
00:23:00.207 {
00:23:00.207 "name": "pt3",
00:23:00.207 "uuid": "00000000-0000-0000-0000-000000000003",
00:23:00.207 "is_configured": true,
00:23:00.207 "data_offset": 2048,
00:23:00.207 "data_size": 63488
00:23:00.207 },
00:23:00.207 {
00:23:00.207 "name": "pt4",
00:23:00.207 "uuid": "00000000-0000-0000-0000-000000000004",
00:23:00.207 "is_configured": true,
00:23:00.207 "data_offset": 2048,
00:23:00.207 "data_size": 63488
00:23:00.207 }
00:23:00.207 ]
00:23:00.207 }'
00:23:00.207 09:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:00.207 09:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:00.466 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:23:00.466 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:23:00.466 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:23:00.466 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:23:00.466 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:23:00.466 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:23:00.466 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:23:00.466 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:00.466 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:00.466 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:23:00.466 [2024-10-15 09:22:44.335836] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:23:00.466 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:00.466 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:23:00.466 "name": "raid_bdev1",
00:23:00.466 "aliases": [
00:23:00.466 "6cde0e7a-8c36-42e7-967d-06ddaae028eb"
00:23:00.466 ],
00:23:00.466 "product_name": "Raid Volume",
00:23:00.466 "block_size": 512,
00:23:00.466 "num_blocks": 190464,
00:23:00.466 "uuid": "6cde0e7a-8c36-42e7-967d-06ddaae028eb",
00:23:00.466 "assigned_rate_limits": {
00:23:00.466 "rw_ios_per_sec": 0,
00:23:00.466 "rw_mbytes_per_sec": 0,
00:23:00.466 "r_mbytes_per_sec": 0,
00:23:00.466 "w_mbytes_per_sec": 0
00:23:00.466 },
00:23:00.466 "claimed": false,
00:23:00.466 "zoned": false,
00:23:00.466 "supported_io_types": {
00:23:00.466 "read": true,
00:23:00.466 "write": true,
00:23:00.466 "unmap": false,
00:23:00.466 "flush": false,
00:23:00.466 "reset": true,
00:23:00.466 "nvme_admin": false,
00:23:00.466 "nvme_io": false,
00:23:00.466 "nvme_io_md": false,
00:23:00.466 "write_zeroes": true,
00:23:00.466 "zcopy": false,
00:23:00.466 "get_zone_info": false,
00:23:00.466 "zone_management": false,
00:23:00.466 "zone_append": false,
00:23:00.466 "compare": false,
00:23:00.466 "compare_and_write": false,
00:23:00.466 "abort": false,
00:23:00.466 "seek_hole": false,
00:23:00.466 "seek_data": false,
00:23:00.466 "copy": false,
00:23:00.466 "nvme_iov_md": false
00:23:00.466 },
00:23:00.466 "driver_specific": {
00:23:00.466 "raid": {
00:23:00.466 "uuid": "6cde0e7a-8c36-42e7-967d-06ddaae028eb",
00:23:00.466 "strip_size_kb": 64,
00:23:00.466 "state": "online",
00:23:00.466 "raid_level": "raid5f",
00:23:00.466 "superblock": true,
00:23:00.466 "num_base_bdevs": 4,
00:23:00.466 "num_base_bdevs_discovered": 4,
00:23:00.466 "num_base_bdevs_operational": 4,
00:23:00.466 "base_bdevs_list": [
00:23:00.466 {
00:23:00.466 "name": "pt1",
00:23:00.466 "uuid": "00000000-0000-0000-0000-000000000001",
00:23:00.466 "is_configured": true,
00:23:00.466 "data_offset": 2048,
00:23:00.466 "data_size": 63488
00:23:00.466 },
00:23:00.466 {
00:23:00.466 "name": "pt2",
00:23:00.466 "uuid": "00000000-0000-0000-0000-000000000002",
00:23:00.466 "is_configured": true,
00:23:00.466 "data_offset": 2048,
00:23:00.466 "data_size": 63488
00:23:00.466 },
00:23:00.466 {
00:23:00.466 "name": "pt3",
00:23:00.466 "uuid": "00000000-0000-0000-0000-000000000003",
00:23:00.466 "is_configured": true,
00:23:00.466 "data_offset": 2048,
00:23:00.466 "data_size": 63488
00:23:00.466 },
00:23:00.466 {
00:23:00.466 "name": "pt4",
00:23:00.466 "uuid": "00000000-0000-0000-0000-000000000004",
00:23:00.466 "is_configured": true,
00:23:00.466 "data_offset": 2048,
00:23:00.466 "data_size": 63488
00:23:00.466 }
00:23:00.466 ]
00:23:00.466 }
00:23:00.466 }
00:23:00.466 }'
00:23:00.466 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:23:00.725 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:23:00.725 pt2
00:23:00.725 pt3
00:23:00.725 pt4'
00:23:00.725 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:00.725 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:23:00.725 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:23:00.725 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:23:00.725 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:00.725 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:00.725 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:00.726 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:01.031 [2024-10-15 09:22:44.675863] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6cde0e7a-8c36-42e7-967d-06ddaae028eb '!=' 6cde0e7a-8c36-42e7-967d-06ddaae028eb ']'
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:01.031 [2024-10-15 09:22:44.719736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.031 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:01.031 "name": "raid_bdev1",
00:23:01.032 "uuid": "6cde0e7a-8c36-42e7-967d-06ddaae028eb",
00:23:01.032 "strip_size_kb": 64,
00:23:01.032 "state": "online",
00:23:01.032 "raid_level": "raid5f",
00:23:01.032 "superblock": true,
00:23:01.032 "num_base_bdevs": 4,
00:23:01.032 "num_base_bdevs_discovered": 3,
00:23:01.032 "num_base_bdevs_operational": 3,
00:23:01.032 "base_bdevs_list": [
00:23:01.032 {
00:23:01.032 "name": null,
00:23:01.032 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:01.032 "is_configured": false,
00:23:01.032 "data_offset": 0,
00:23:01.032 "data_size": 63488
00:23:01.032 },
00:23:01.032 {
00:23:01.032 "name": "pt2",
00:23:01.032 "uuid": "00000000-0000-0000-0000-000000000002",
00:23:01.032 "is_configured": true,
00:23:01.032 "data_offset": 2048,
00:23:01.032 "data_size": 63488
00:23:01.032 },
00:23:01.032 {
00:23:01.032 "name": "pt3",
00:23:01.032 "uuid": "00000000-0000-0000-0000-000000000003",
00:23:01.032 "is_configured": true,
00:23:01.032 "data_offset": 2048,
00:23:01.032 "data_size": 63488
00:23:01.032 },
00:23:01.032 {
00:23:01.032 "name": "pt4",
00:23:01.032 "uuid": "00000000-0000-0000-0000-000000000004",
00:23:01.032 "is_configured": true,
00:23:01.032 "data_offset": 2048,
00:23:01.032 "data_size": 63488
00:23:01.032 }
00:23:01.032 ]
00:23:01.032 }'
00:23:01.032 09:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:01.032 09:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:01.599 [2024-10-15 09:22:45.239759] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
[2024-10-15 09:22:45.239802] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-10-15 09:22:45.239919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-10-15 09:22:45.240036] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-10-15 09:22:45.240054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test --
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.599 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.599 [2024-10-15 09:22:45.327726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:01.599 [2024-10-15 09:22:45.327971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.599 [2024-10-15 09:22:45.328051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:01.599 [2024-10-15 09:22:45.328210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.599 [2024-10-15 09:22:45.331444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.599 [2024-10-15 09:22:45.331490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:01.599 [2024-10-15 09:22:45.331601] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:01.600 [2024-10-15 09:22:45.331665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:01.600 pt2 00:23:01.600 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.600 09:22:45 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:01.600 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:01.600 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:01.600 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:01.600 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:01.600 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:01.600 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:01.600 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:01.600 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:01.600 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:01.600 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.600 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.600 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.600 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.600 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.600 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:01.600 "name": "raid_bdev1", 00:23:01.600 "uuid": "6cde0e7a-8c36-42e7-967d-06ddaae028eb", 00:23:01.600 "strip_size_kb": 64, 00:23:01.600 "state": "configuring", 00:23:01.600 "raid_level": "raid5f", 00:23:01.600 "superblock": true, 00:23:01.600 
"num_base_bdevs": 4, 00:23:01.600 "num_base_bdevs_discovered": 1, 00:23:01.600 "num_base_bdevs_operational": 3, 00:23:01.600 "base_bdevs_list": [ 00:23:01.600 { 00:23:01.600 "name": null, 00:23:01.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.600 "is_configured": false, 00:23:01.600 "data_offset": 2048, 00:23:01.600 "data_size": 63488 00:23:01.600 }, 00:23:01.600 { 00:23:01.600 "name": "pt2", 00:23:01.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:01.600 "is_configured": true, 00:23:01.600 "data_offset": 2048, 00:23:01.600 "data_size": 63488 00:23:01.600 }, 00:23:01.600 { 00:23:01.600 "name": null, 00:23:01.600 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:01.600 "is_configured": false, 00:23:01.600 "data_offset": 2048, 00:23:01.600 "data_size": 63488 00:23:01.600 }, 00:23:01.600 { 00:23:01.600 "name": null, 00:23:01.600 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:01.600 "is_configured": false, 00:23:01.600 "data_offset": 2048, 00:23:01.600 "data_size": 63488 00:23:01.600 } 00:23:01.600 ] 00:23:01.600 }' 00:23:01.600 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:01.600 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.165 [2024-10-15 09:22:45.856143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:02.165 [2024-10-15 
09:22:45.856384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:02.165 [2024-10-15 09:22:45.856434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:23:02.165 [2024-10-15 09:22:45.856452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:02.165 [2024-10-15 09:22:45.857108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:02.165 [2024-10-15 09:22:45.857155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:02.165 [2024-10-15 09:22:45.857283] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:02.165 [2024-10-15 09:22:45.857331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:02.165 pt3 00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.165 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:02.165 "name": "raid_bdev1", 00:23:02.165 "uuid": "6cde0e7a-8c36-42e7-967d-06ddaae028eb", 00:23:02.165 "strip_size_kb": 64, 00:23:02.165 "state": "configuring", 00:23:02.165 "raid_level": "raid5f", 00:23:02.165 "superblock": true, 00:23:02.165 "num_base_bdevs": 4, 00:23:02.166 "num_base_bdevs_discovered": 2, 00:23:02.166 "num_base_bdevs_operational": 3, 00:23:02.166 "base_bdevs_list": [ 00:23:02.166 { 00:23:02.166 "name": null, 00:23:02.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.166 "is_configured": false, 00:23:02.166 "data_offset": 2048, 00:23:02.166 "data_size": 63488 00:23:02.166 }, 00:23:02.166 { 00:23:02.166 "name": "pt2", 00:23:02.166 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:02.166 "is_configured": true, 00:23:02.166 "data_offset": 2048, 00:23:02.166 "data_size": 63488 00:23:02.166 }, 00:23:02.166 { 00:23:02.166 "name": "pt3", 00:23:02.166 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:02.166 "is_configured": true, 00:23:02.166 "data_offset": 2048, 00:23:02.166 "data_size": 63488 00:23:02.166 }, 00:23:02.166 { 00:23:02.166 "name": null, 00:23:02.166 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:02.166 "is_configured": false, 00:23:02.166 "data_offset": 2048, 
00:23:02.166 "data_size": 63488 00:23:02.166 } 00:23:02.166 ] 00:23:02.166 }' 00:23:02.166 09:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:02.166 09:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.733 [2024-10-15 09:22:46.380310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:02.733 [2024-10-15 09:22:46.380552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:02.733 [2024-10-15 09:22:46.380714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:23:02.733 [2024-10-15 09:22:46.380839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:02.733 [2024-10-15 09:22:46.381565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:02.733 [2024-10-15 09:22:46.381743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:02.733 [2024-10-15 09:22:46.381883] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:02.733 [2024-10-15 09:22:46.381921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:02.733 [2024-10-15 09:22:46.382111] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:02.733 [2024-10-15 09:22:46.382145] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:02.733 [2024-10-15 09:22:46.382499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:02.733 pt4 00:23:02.733 [2024-10-15 09:22:46.389161] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:02.733 [2024-10-15 09:22:46.389208] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:02.733 [2024-10-15 09:22:46.389577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:02.733 
09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.733 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:02.733 "name": "raid_bdev1", 00:23:02.733 "uuid": "6cde0e7a-8c36-42e7-967d-06ddaae028eb", 00:23:02.733 "strip_size_kb": 64, 00:23:02.733 "state": "online", 00:23:02.733 "raid_level": "raid5f", 00:23:02.733 "superblock": true, 00:23:02.733 "num_base_bdevs": 4, 00:23:02.733 "num_base_bdevs_discovered": 3, 00:23:02.733 "num_base_bdevs_operational": 3, 00:23:02.733 "base_bdevs_list": [ 00:23:02.733 { 00:23:02.733 "name": null, 00:23:02.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.733 "is_configured": false, 00:23:02.733 "data_offset": 2048, 00:23:02.733 "data_size": 63488 00:23:02.733 }, 00:23:02.733 { 00:23:02.733 "name": "pt2", 00:23:02.733 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:02.733 "is_configured": true, 00:23:02.733 "data_offset": 2048, 00:23:02.734 "data_size": 63488 00:23:02.734 }, 00:23:02.734 { 00:23:02.734 "name": "pt3", 00:23:02.734 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:02.734 "is_configured": true, 00:23:02.734 "data_offset": 2048, 00:23:02.734 "data_size": 63488 00:23:02.734 }, 00:23:02.734 { 00:23:02.734 "name": "pt4", 00:23:02.734 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:02.734 "is_configured": true, 00:23:02.734 "data_offset": 2048, 00:23:02.734 "data_size": 63488 00:23:02.734 } 00:23:02.734 ] 00:23:02.734 }' 00:23:02.734 09:22:46 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:02.734 09:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.301 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:03.301 09:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.301 09:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.301 [2024-10-15 09:22:46.929624] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:03.301 [2024-10-15 09:22:46.929825] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:03.301 [2024-10-15 09:22:46.929985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:03.301 [2024-10-15 09:22:46.930095] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:03.301 [2024-10-15 09:22:46.930117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:03.301 09:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.301 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.301 09:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.301 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:23:03.301 09:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.301 09:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.301 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:23:03.301 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:23:03.302 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:23:03.302 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:23:03.302 09:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:23:03.302 09:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.302 09:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.302 [2024-10-15 09:22:47.005631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:03.302 [2024-10-15 09:22:47.005842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:03.302 [2024-10-15 09:22:47.005891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:23:03.302 [2024-10-15 09:22:47.005912] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.302 [2024-10-15 09:22:47.009043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:03.302 pt1 00:23:03.302 [2024-10-15 09:22:47.009228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:03.302 [2024-10-15 09:22:47.009361] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:03.302 [2024-10-15 09:22:47.009440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:23:03.302 [2024-10-15 09:22:47.009618] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:03.302 [2024-10-15 09:22:47.009643] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:03.302 [2024-10-15 09:22:47.009666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:23:03.302 [2024-10-15 09:22:47.009742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:03.302 [2024-10-15 09:22:47.009950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:03.302 "name": "raid_bdev1", 00:23:03.302 "uuid": "6cde0e7a-8c36-42e7-967d-06ddaae028eb", 00:23:03.302 "strip_size_kb": 64, 00:23:03.302 "state": "configuring", 00:23:03.302 "raid_level": "raid5f", 00:23:03.302 "superblock": true, 00:23:03.302 "num_base_bdevs": 4, 00:23:03.302 "num_base_bdevs_discovered": 2, 00:23:03.302 "num_base_bdevs_operational": 3, 00:23:03.302 "base_bdevs_list": [ 00:23:03.302 { 00:23:03.302 "name": null, 00:23:03.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.302 "is_configured": false, 00:23:03.302 "data_offset": 2048, 00:23:03.302 "data_size": 63488 00:23:03.302 }, 00:23:03.302 { 00:23:03.302 "name": "pt2", 00:23:03.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:03.302 "is_configured": true, 00:23:03.302 "data_offset": 2048, 00:23:03.302 "data_size": 63488 00:23:03.302 }, 00:23:03.302 { 00:23:03.302 "name": "pt3", 00:23:03.302 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:03.302 "is_configured": true, 00:23:03.302 "data_offset": 2048, 00:23:03.302 "data_size": 63488 00:23:03.302 }, 00:23:03.302 { 00:23:03.302 "name": null, 00:23:03.302 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:03.302 "is_configured": false, 00:23:03.302 "data_offset": 2048, 00:23:03.302 "data_size": 63488 00:23:03.302 } 00:23:03.302 ] 
00:23:03.302 }' 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:03.302 09:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.870 [2024-10-15 09:22:47.586018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:03.870 [2024-10-15 09:22:47.586315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:03.870 [2024-10-15 09:22:47.586380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:23:03.870 [2024-10-15 09:22:47.586405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.870 [2024-10-15 09:22:47.587077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:03.870 [2024-10-15 09:22:47.587103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:23:03.870 [2024-10-15 09:22:47.587388] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:03.870 [2024-10-15 09:22:47.587568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:03.870 [2024-10-15 09:22:47.587879] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:23:03.870 [2024-10-15 09:22:47.588008] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:03.870 [2024-10-15 09:22:47.588425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:03.870 [2024-10-15 09:22:47.595328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:23:03.870 [2024-10-15 09:22:47.595510] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:23:03.870 pt4 00:23:03.870 [2024-10-15 09:22:47.595995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:03.870 09:22:47 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:03.870 "name": "raid_bdev1", 00:23:03.870 "uuid": "6cde0e7a-8c36-42e7-967d-06ddaae028eb", 00:23:03.870 "strip_size_kb": 64, 00:23:03.870 "state": "online", 00:23:03.870 "raid_level": "raid5f", 00:23:03.870 "superblock": true, 00:23:03.870 "num_base_bdevs": 4, 00:23:03.870 "num_base_bdevs_discovered": 3, 00:23:03.870 "num_base_bdevs_operational": 3, 00:23:03.870 "base_bdevs_list": [ 00:23:03.870 { 00:23:03.870 "name": null, 00:23:03.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.870 "is_configured": false, 00:23:03.870 "data_offset": 2048, 00:23:03.870 "data_size": 63488 00:23:03.870 }, 00:23:03.870 { 00:23:03.870 "name": "pt2", 00:23:03.870 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:03.870 "is_configured": true, 00:23:03.870 "data_offset": 2048, 00:23:03.870 "data_size": 63488 00:23:03.870 }, 00:23:03.870 { 00:23:03.870 "name": "pt3", 00:23:03.870 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:03.870 "is_configured": true, 00:23:03.870 "data_offset": 2048, 00:23:03.870 "data_size": 63488 
00:23:03.870 }, 00:23:03.870 { 00:23:03.870 "name": "pt4", 00:23:03.870 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:03.870 "is_configured": true, 00:23:03.870 "data_offset": 2048, 00:23:03.870 "data_size": 63488 00:23:03.870 } 00:23:03.870 ] 00:23:03.870 }' 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:03.870 09:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.443 [2024-10-15 09:22:48.228357] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 6cde0e7a-8c36-42e7-967d-06ddaae028eb '!=' 6cde0e7a-8c36-42e7-967d-06ddaae028eb ']' 00:23:04.443 09:22:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84830 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 84830 ']' 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 84830 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84830 00:23:04.443 killing process with pid 84830 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84830' 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 84830 00:23:04.443 [2024-10-15 09:22:48.321210] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:04.443 09:22:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 84830 00:23:04.443 [2024-10-15 09:22:48.321355] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:04.443 [2024-10-15 09:22:48.321470] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:04.443 [2024-10-15 09:22:48.321492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:23:05.010 [2024-10-15 09:22:48.704560] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:05.944 09:22:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:23:05.944 
00:23:05.944 real 0m9.679s 00:23:05.944 user 0m15.715s 00:23:05.944 sys 0m1.503s 00:23:05.944 ************************************ 00:23:05.944 END TEST raid5f_superblock_test 00:23:05.944 ************************************ 00:23:05.944 09:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:05.944 09:22:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.944 09:22:49 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:23:05.944 09:22:49 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:23:05.944 09:22:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:23:05.944 09:22:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:05.944 09:22:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:05.944 ************************************ 00:23:05.944 START TEST raid5f_rebuild_test 00:23:05.944 ************************************ 00:23:05.944 09:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:23:05.944 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:23:05.944 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:23:06.202 09:22:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:23:06.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85321 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85321 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 85321 ']' 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:06.202 09:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.202 [2024-10-15 09:22:49.979557] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:23:06.202 [2024-10-15 09:22:49.980016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85321 ] 00:23:06.202 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:06.202 Zero copy mechanism will not be used. [2024-10-15 09:22:50.146550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.460 [2024-10-15 09:22:50.294455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.718 [2024-10-15 09:22:50.521455] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:06.718 [2024-10-15 09:22:50.521759] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:07.295 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:07.295 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:23:07.295 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:07.295 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:07.295 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.296 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.296 BaseBdev1_malloc 00:23:07.296 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.296 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:07.296 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.296 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10
-- # set +x 00:23:07.296 [2024-10-15 09:22:51.110345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:07.296 [2024-10-15 09:22:51.110647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.296 [2024-10-15 09:22:51.110702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:07.296 [2024-10-15 09:22:51.110725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.296 BaseBdev1 00:23:07.296 [2024-10-15 09:22:51.113922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.296 [2024-10-15 09:22:51.113980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:07.296 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.296 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:07.296 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:07.296 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.296 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.296 BaseBdev2_malloc 00:23:07.296 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.296 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:07.296 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.296 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.296 [2024-10-15 09:22:51.169940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:07.296 [2024-10-15 09:22:51.170179] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.296 [2024-10-15 09:22:51.170401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:07.296 [2024-10-15 09:22:51.170573] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.296 [2024-10-15 09:22:51.173526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.296 [2024-10-15 09:22:51.173707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:07.296 BaseBdev2 00:23:07.296 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.296 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:07.296 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:07.296 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.296 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.561 BaseBdev3_malloc 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.561 [2024-10-15 09:22:51.236709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:07.561 [2024-10-15 09:22:51.237054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.561 [2024-10-15 09:22:51.237130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:07.561 
[2024-10-15 09:22:51.237156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.561 BaseBdev3 00:23:07.561 [2024-10-15 09:22:51.240631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.561 [2024-10-15 09:22:51.240686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.561 BaseBdev4_malloc 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.561 [2024-10-15 09:22:51.292730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:07.561 [2024-10-15 09:22:51.292829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.561 [2024-10-15 09:22:51.292876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:07.561 [2024-10-15 09:22:51.292899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.561 [2024-10-15 09:22:51.295971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:23:07.561 BaseBdev4 00:23:07.561 [2024-10-15 09:22:51.296168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.561 spare_malloc 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.561 spare_delay 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.561 [2024-10-15 09:22:51.356503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:07.561 [2024-10-15 09:22:51.356585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.561 [2024-10-15 09:22:51.356619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:07.561 [2024-10-15 09:22:51.356638] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.561 [2024-10-15 09:22:51.359627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.561 spare 00:23:07.561 [2024-10-15 09:22:51.359811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.561 [2024-10-15 09:22:51.364725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:07.561 [2024-10-15 09:22:51.367350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:07.561 [2024-10-15 09:22:51.367450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:07.561 [2024-10-15 09:22:51.367537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:07.561 [2024-10-15 09:22:51.367686] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:07.561 [2024-10-15 09:22:51.367714] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:07.561 [2024-10-15 09:22:51.368072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:07.561 [2024-10-15 09:22:51.374955] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:07.561 [2024-10-15 09:22:51.375105] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:07.561 [2024-10-15 
09:22:51.375435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:07.561 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:07.562 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:07.562 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:07.562 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:07.562 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.562 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.562 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.562 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.562 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.562 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:07.562 "name": "raid_bdev1", 00:23:07.562 "uuid": 
"733cd81c-7ff6-46a5-b8c5-19b2f5362197", 00:23:07.562 "strip_size_kb": 64, 00:23:07.562 "state": "online", 00:23:07.562 "raid_level": "raid5f", 00:23:07.562 "superblock": false, 00:23:07.562 "num_base_bdevs": 4, 00:23:07.562 "num_base_bdevs_discovered": 4, 00:23:07.562 "num_base_bdevs_operational": 4, 00:23:07.562 "base_bdevs_list": [ 00:23:07.562 { 00:23:07.562 "name": "BaseBdev1", 00:23:07.562 "uuid": "1d6b0495-55da-59ac-8086-dffa57c1c8ba", 00:23:07.562 "is_configured": true, 00:23:07.562 "data_offset": 0, 00:23:07.562 "data_size": 65536 00:23:07.562 }, 00:23:07.562 { 00:23:07.562 "name": "BaseBdev2", 00:23:07.562 "uuid": "bd2ed000-2d1b-5756-a2eb-898f8afc1f89", 00:23:07.562 "is_configured": true, 00:23:07.562 "data_offset": 0, 00:23:07.562 "data_size": 65536 00:23:07.562 }, 00:23:07.562 { 00:23:07.562 "name": "BaseBdev3", 00:23:07.562 "uuid": "0c844a61-fb73-5c33-ac19-0724870485c3", 00:23:07.562 "is_configured": true, 00:23:07.562 "data_offset": 0, 00:23:07.562 "data_size": 65536 00:23:07.562 }, 00:23:07.562 { 00:23:07.562 "name": "BaseBdev4", 00:23:07.562 "uuid": "fcb1706f-23b4-5cf6-b04c-845d490388f4", 00:23:07.562 "is_configured": true, 00:23:07.562 "data_offset": 0, 00:23:07.562 "data_size": 65536 00:23:07.562 } 00:23:07.562 ] 00:23:07.562 }' 00:23:07.562 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:07.562 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.129 [2024-10-15 09:22:51.823836] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:08.129 09:22:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:08.388 [2024-10-15 09:22:52.207754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:08.388 /dev/nbd0 00:23:08.388 09:22:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:08.388 09:22:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:08.388 09:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:08.388 09:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:23:08.388 09:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:08.388 09:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:08.388 09:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:08.388 09:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:23:08.388 09:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:08.388 09:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:08.388 09:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:08.388 1+0 records in 00:23:08.388 1+0 records out 00:23:08.388 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312614 s, 13.1 MB/s 00:23:08.388 09:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:08.388 09:22:52 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:23:08.388 09:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:08.388 09:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:08.388 09:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:23:08.388 09:22:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:08.388 09:22:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:08.388 09:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:23:08.388 09:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:23:08.388 09:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:23:08.388 09:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:23:08.957 512+0 records in 00:23:08.957 512+0 records out 00:23:08.957 100663296 bytes (101 MB, 96 MiB) copied, 0.607475 s, 166 MB/s 00:23:08.957 09:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:08.957 09:22:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:08.957 09:22:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:08.957 09:22:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:09.216 09:22:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:09.216 09:22:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:09.216 09:22:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:23:09.478 [2024-10-15 09:22:53.156917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.478 [2024-10-15 09:22:53.177224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:09.478 "name": "raid_bdev1", 00:23:09.478 "uuid": "733cd81c-7ff6-46a5-b8c5-19b2f5362197", 00:23:09.478 "strip_size_kb": 64, 00:23:09.478 "state": "online", 00:23:09.478 "raid_level": "raid5f", 00:23:09.478 "superblock": false, 00:23:09.478 "num_base_bdevs": 4, 00:23:09.478 "num_base_bdevs_discovered": 3, 00:23:09.478 "num_base_bdevs_operational": 3, 00:23:09.478 "base_bdevs_list": [ 00:23:09.478 { 00:23:09.478 "name": null, 00:23:09.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.478 "is_configured": false, 00:23:09.478 "data_offset": 0, 00:23:09.478 "data_size": 65536 00:23:09.478 }, 00:23:09.478 { 00:23:09.478 "name": "BaseBdev2", 00:23:09.478 "uuid": "bd2ed000-2d1b-5756-a2eb-898f8afc1f89", 00:23:09.478 "is_configured": true, 00:23:09.478 
"data_offset": 0, 00:23:09.478 "data_size": 65536 00:23:09.478 }, 00:23:09.478 { 00:23:09.478 "name": "BaseBdev3", 00:23:09.478 "uuid": "0c844a61-fb73-5c33-ac19-0724870485c3", 00:23:09.478 "is_configured": true, 00:23:09.478 "data_offset": 0, 00:23:09.478 "data_size": 65536 00:23:09.478 }, 00:23:09.478 { 00:23:09.478 "name": "BaseBdev4", 00:23:09.478 "uuid": "fcb1706f-23b4-5cf6-b04c-845d490388f4", 00:23:09.478 "is_configured": true, 00:23:09.478 "data_offset": 0, 00:23:09.478 "data_size": 65536 00:23:09.478 } 00:23:09.478 ] 00:23:09.478 }' 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:09.478 09:22:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.046 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:10.046 09:22:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.046 09:22:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.046 [2024-10-15 09:22:53.693360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:10.046 [2024-10-15 09:22:53.708167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:23:10.046 09:22:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.046 09:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:10.046 [2024-10-15 09:22:53.717425] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:10.981 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:10.981 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:10.981 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:23:10.981 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:10.981 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:10.981 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.981 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.981 09:22:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.981 09:22:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.981 09:22:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.981 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:10.981 "name": "raid_bdev1", 00:23:10.981 "uuid": "733cd81c-7ff6-46a5-b8c5-19b2f5362197", 00:23:10.981 "strip_size_kb": 64, 00:23:10.981 "state": "online", 00:23:10.981 "raid_level": "raid5f", 00:23:10.981 "superblock": false, 00:23:10.981 "num_base_bdevs": 4, 00:23:10.981 "num_base_bdevs_discovered": 4, 00:23:10.981 "num_base_bdevs_operational": 4, 00:23:10.981 "process": { 00:23:10.981 "type": "rebuild", 00:23:10.981 "target": "spare", 00:23:10.981 "progress": { 00:23:10.981 "blocks": 17280, 00:23:10.981 "percent": 8 00:23:10.981 } 00:23:10.981 }, 00:23:10.981 "base_bdevs_list": [ 00:23:10.981 { 00:23:10.981 "name": "spare", 00:23:10.981 "uuid": "7a9e3423-d7cf-5571-bd8b-6e1ddbc77cf8", 00:23:10.981 "is_configured": true, 00:23:10.981 "data_offset": 0, 00:23:10.981 "data_size": 65536 00:23:10.981 }, 00:23:10.981 { 00:23:10.981 "name": "BaseBdev2", 00:23:10.981 "uuid": "bd2ed000-2d1b-5756-a2eb-898f8afc1f89", 00:23:10.981 "is_configured": true, 00:23:10.981 "data_offset": 0, 00:23:10.981 "data_size": 65536 00:23:10.981 }, 00:23:10.981 { 00:23:10.981 "name": "BaseBdev3", 00:23:10.981 "uuid": 
"0c844a61-fb73-5c33-ac19-0724870485c3", 00:23:10.981 "is_configured": true, 00:23:10.981 "data_offset": 0, 00:23:10.981 "data_size": 65536 00:23:10.981 }, 00:23:10.981 { 00:23:10.981 "name": "BaseBdev4", 00:23:10.981 "uuid": "fcb1706f-23b4-5cf6-b04c-845d490388f4", 00:23:10.981 "is_configured": true, 00:23:10.981 "data_offset": 0, 00:23:10.981 "data_size": 65536 00:23:10.981 } 00:23:10.981 ] 00:23:10.981 }' 00:23:10.981 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:10.981 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:10.981 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:10.981 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:10.981 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:10.981 09:22:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.981 09:22:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.981 [2024-10-15 09:22:54.880043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:11.239 [2024-10-15 09:22:54.933718] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:11.239 [2024-10-15 09:22:54.933862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:11.239 [2024-10-15 09:22:54.933893] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:11.240 [2024-10-15 09:22:54.933924] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:11.240 09:22:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.240 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:11.240 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:11.240 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:11.240 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:11.240 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:11.240 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:11.240 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:11.240 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:11.240 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:11.240 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:11.240 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.240 09:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.240 09:22:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.240 09:22:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.240 09:22:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.240 09:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:11.240 "name": "raid_bdev1", 00:23:11.240 "uuid": "733cd81c-7ff6-46a5-b8c5-19b2f5362197", 00:23:11.240 "strip_size_kb": 64, 00:23:11.240 "state": "online", 00:23:11.240 "raid_level": "raid5f", 00:23:11.240 "superblock": false, 00:23:11.240 "num_base_bdevs": 4, 00:23:11.240 "num_base_bdevs_discovered": 3, 00:23:11.240 
"num_base_bdevs_operational": 3, 00:23:11.240 "base_bdevs_list": [ 00:23:11.240 { 00:23:11.240 "name": null, 00:23:11.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.240 "is_configured": false, 00:23:11.240 "data_offset": 0, 00:23:11.240 "data_size": 65536 00:23:11.240 }, 00:23:11.240 { 00:23:11.240 "name": "BaseBdev2", 00:23:11.240 "uuid": "bd2ed000-2d1b-5756-a2eb-898f8afc1f89", 00:23:11.240 "is_configured": true, 00:23:11.240 "data_offset": 0, 00:23:11.240 "data_size": 65536 00:23:11.240 }, 00:23:11.240 { 00:23:11.240 "name": "BaseBdev3", 00:23:11.240 "uuid": "0c844a61-fb73-5c33-ac19-0724870485c3", 00:23:11.240 "is_configured": true, 00:23:11.240 "data_offset": 0, 00:23:11.240 "data_size": 65536 00:23:11.240 }, 00:23:11.240 { 00:23:11.240 "name": "BaseBdev4", 00:23:11.240 "uuid": "fcb1706f-23b4-5cf6-b04c-845d490388f4", 00:23:11.240 "is_configured": true, 00:23:11.240 "data_offset": 0, 00:23:11.240 "data_size": 65536 00:23:11.240 } 00:23:11.240 ] 00:23:11.240 }' 00:23:11.240 09:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:11.240 09:22:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.806 09:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:11.806 09:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:11.806 09:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:11.806 09:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:11.806 09:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:11.806 09:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.806 09:22:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.807 09:22:55 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.807 09:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.807 09:22:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.807 09:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:11.807 "name": "raid_bdev1", 00:23:11.807 "uuid": "733cd81c-7ff6-46a5-b8c5-19b2f5362197", 00:23:11.807 "strip_size_kb": 64, 00:23:11.807 "state": "online", 00:23:11.807 "raid_level": "raid5f", 00:23:11.807 "superblock": false, 00:23:11.807 "num_base_bdevs": 4, 00:23:11.807 "num_base_bdevs_discovered": 3, 00:23:11.807 "num_base_bdevs_operational": 3, 00:23:11.807 "base_bdevs_list": [ 00:23:11.807 { 00:23:11.807 "name": null, 00:23:11.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.807 "is_configured": false, 00:23:11.807 "data_offset": 0, 00:23:11.807 "data_size": 65536 00:23:11.807 }, 00:23:11.807 { 00:23:11.807 "name": "BaseBdev2", 00:23:11.807 "uuid": "bd2ed000-2d1b-5756-a2eb-898f8afc1f89", 00:23:11.807 "is_configured": true, 00:23:11.807 "data_offset": 0, 00:23:11.807 "data_size": 65536 00:23:11.807 }, 00:23:11.807 { 00:23:11.807 "name": "BaseBdev3", 00:23:11.807 "uuid": "0c844a61-fb73-5c33-ac19-0724870485c3", 00:23:11.807 "is_configured": true, 00:23:11.807 "data_offset": 0, 00:23:11.807 "data_size": 65536 00:23:11.807 }, 00:23:11.807 { 00:23:11.807 "name": "BaseBdev4", 00:23:11.807 "uuid": "fcb1706f-23b4-5cf6-b04c-845d490388f4", 00:23:11.807 "is_configured": true, 00:23:11.807 "data_offset": 0, 00:23:11.807 "data_size": 65536 00:23:11.807 } 00:23:11.807 ] 00:23:11.807 }' 00:23:11.807 09:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:11.807 09:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:11.807 09:22:55 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:11.807 09:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:11.807 09:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:11.807 09:22:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.807 09:22:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.807 [2024-10-15 09:22:55.660900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:11.807 [2024-10-15 09:22:55.674958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:23:11.807 09:22:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.807 09:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:11.807 [2024-10-15 09:22:55.684219] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.184 
09:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:13.184 "name": "raid_bdev1", 00:23:13.184 "uuid": "733cd81c-7ff6-46a5-b8c5-19b2f5362197", 00:23:13.184 "strip_size_kb": 64, 00:23:13.184 "state": "online", 00:23:13.184 "raid_level": "raid5f", 00:23:13.184 "superblock": false, 00:23:13.184 "num_base_bdevs": 4, 00:23:13.184 "num_base_bdevs_discovered": 4, 00:23:13.184 "num_base_bdevs_operational": 4, 00:23:13.184 "process": { 00:23:13.184 "type": "rebuild", 00:23:13.184 "target": "spare", 00:23:13.184 "progress": { 00:23:13.184 "blocks": 17280, 00:23:13.184 "percent": 8 00:23:13.184 } 00:23:13.184 }, 00:23:13.184 "base_bdevs_list": [ 00:23:13.184 { 00:23:13.184 "name": "spare", 00:23:13.184 "uuid": "7a9e3423-d7cf-5571-bd8b-6e1ddbc77cf8", 00:23:13.184 "is_configured": true, 00:23:13.184 "data_offset": 0, 00:23:13.184 "data_size": 65536 00:23:13.184 }, 00:23:13.184 { 00:23:13.184 "name": "BaseBdev2", 00:23:13.184 "uuid": "bd2ed000-2d1b-5756-a2eb-898f8afc1f89", 00:23:13.184 "is_configured": true, 00:23:13.184 "data_offset": 0, 00:23:13.184 "data_size": 65536 00:23:13.184 }, 00:23:13.184 { 00:23:13.184 "name": "BaseBdev3", 00:23:13.184 "uuid": "0c844a61-fb73-5c33-ac19-0724870485c3", 00:23:13.184 "is_configured": true, 00:23:13.184 "data_offset": 0, 00:23:13.184 "data_size": 65536 00:23:13.184 }, 00:23:13.184 { 00:23:13.184 "name": "BaseBdev4", 00:23:13.184 "uuid": "fcb1706f-23b4-5cf6-b04c-845d490388f4", 00:23:13.184 "is_configured": true, 00:23:13.184 "data_offset": 0, 00:23:13.184 "data_size": 65536 00:23:13.184 } 00:23:13.184 ] 00:23:13.184 }' 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 
-- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=687 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.184 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:23:13.185 "name": "raid_bdev1", 00:23:13.185 "uuid": "733cd81c-7ff6-46a5-b8c5-19b2f5362197", 00:23:13.185 "strip_size_kb": 64, 00:23:13.185 "state": "online", 00:23:13.185 "raid_level": "raid5f", 00:23:13.185 "superblock": false, 00:23:13.185 "num_base_bdevs": 4, 00:23:13.185 "num_base_bdevs_discovered": 4, 00:23:13.185 "num_base_bdevs_operational": 4, 00:23:13.185 "process": { 00:23:13.185 "type": "rebuild", 00:23:13.185 "target": "spare", 00:23:13.185 "progress": { 00:23:13.185 "blocks": 21120, 00:23:13.185 "percent": 10 00:23:13.185 } 00:23:13.185 }, 00:23:13.185 "base_bdevs_list": [ 00:23:13.185 { 00:23:13.185 "name": "spare", 00:23:13.185 "uuid": "7a9e3423-d7cf-5571-bd8b-6e1ddbc77cf8", 00:23:13.185 "is_configured": true, 00:23:13.185 "data_offset": 0, 00:23:13.185 "data_size": 65536 00:23:13.185 }, 00:23:13.185 { 00:23:13.185 "name": "BaseBdev2", 00:23:13.185 "uuid": "bd2ed000-2d1b-5756-a2eb-898f8afc1f89", 00:23:13.185 "is_configured": true, 00:23:13.185 "data_offset": 0, 00:23:13.185 "data_size": 65536 00:23:13.185 }, 00:23:13.185 { 00:23:13.185 "name": "BaseBdev3", 00:23:13.185 "uuid": "0c844a61-fb73-5c33-ac19-0724870485c3", 00:23:13.185 "is_configured": true, 00:23:13.185 "data_offset": 0, 00:23:13.185 "data_size": 65536 00:23:13.185 }, 00:23:13.185 { 00:23:13.185 "name": "BaseBdev4", 00:23:13.185 "uuid": "fcb1706f-23b4-5cf6-b04c-845d490388f4", 00:23:13.185 "is_configured": true, 00:23:13.185 "data_offset": 0, 00:23:13.185 "data_size": 65536 00:23:13.185 } 00:23:13.185 ] 00:23:13.185 }' 00:23:13.185 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:13.185 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:13.185 09:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:13.185 09:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:13.185 09:22:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:14.120 09:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:14.120 09:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:14.120 09:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:14.120 09:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:14.120 09:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:14.120 09:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:14.120 09:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.120 09:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.120 09:22:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.120 09:22:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.120 09:22:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.378 09:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:14.378 "name": "raid_bdev1", 00:23:14.378 "uuid": "733cd81c-7ff6-46a5-b8c5-19b2f5362197", 00:23:14.378 "strip_size_kb": 64, 00:23:14.378 "state": "online", 00:23:14.378 "raid_level": "raid5f", 00:23:14.378 "superblock": false, 00:23:14.378 "num_base_bdevs": 4, 00:23:14.378 "num_base_bdevs_discovered": 4, 00:23:14.378 "num_base_bdevs_operational": 4, 00:23:14.378 "process": { 00:23:14.378 "type": "rebuild", 00:23:14.378 "target": "spare", 00:23:14.378 "progress": { 00:23:14.378 "blocks": 44160, 00:23:14.378 "percent": 22 00:23:14.378 } 00:23:14.378 }, 00:23:14.378 "base_bdevs_list": [ 00:23:14.378 { 
00:23:14.378 "name": "spare", 00:23:14.378 "uuid": "7a9e3423-d7cf-5571-bd8b-6e1ddbc77cf8", 00:23:14.378 "is_configured": true, 00:23:14.378 "data_offset": 0, 00:23:14.378 "data_size": 65536 00:23:14.378 }, 00:23:14.378 { 00:23:14.378 "name": "BaseBdev2", 00:23:14.378 "uuid": "bd2ed000-2d1b-5756-a2eb-898f8afc1f89", 00:23:14.378 "is_configured": true, 00:23:14.378 "data_offset": 0, 00:23:14.378 "data_size": 65536 00:23:14.378 }, 00:23:14.378 { 00:23:14.378 "name": "BaseBdev3", 00:23:14.378 "uuid": "0c844a61-fb73-5c33-ac19-0724870485c3", 00:23:14.378 "is_configured": true, 00:23:14.378 "data_offset": 0, 00:23:14.378 "data_size": 65536 00:23:14.378 }, 00:23:14.378 { 00:23:14.378 "name": "BaseBdev4", 00:23:14.378 "uuid": "fcb1706f-23b4-5cf6-b04c-845d490388f4", 00:23:14.378 "is_configured": true, 00:23:14.378 "data_offset": 0, 00:23:14.378 "data_size": 65536 00:23:14.378 } 00:23:14.378 ] 00:23:14.378 }' 00:23:14.378 09:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:14.378 09:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:14.378 09:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:14.378 09:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:14.378 09:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:15.314 09:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:15.314 09:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:15.314 09:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:15.314 09:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:15.314 09:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:23:15.314 09:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:15.314 09:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.314 09:22:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.314 09:22:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.314 09:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.314 09:22:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.572 09:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:15.572 "name": "raid_bdev1", 00:23:15.572 "uuid": "733cd81c-7ff6-46a5-b8c5-19b2f5362197", 00:23:15.572 "strip_size_kb": 64, 00:23:15.572 "state": "online", 00:23:15.572 "raid_level": "raid5f", 00:23:15.572 "superblock": false, 00:23:15.572 "num_base_bdevs": 4, 00:23:15.572 "num_base_bdevs_discovered": 4, 00:23:15.572 "num_base_bdevs_operational": 4, 00:23:15.572 "process": { 00:23:15.572 "type": "rebuild", 00:23:15.572 "target": "spare", 00:23:15.572 "progress": { 00:23:15.572 "blocks": 65280, 00:23:15.572 "percent": 33 00:23:15.572 } 00:23:15.572 }, 00:23:15.572 "base_bdevs_list": [ 00:23:15.572 { 00:23:15.572 "name": "spare", 00:23:15.572 "uuid": "7a9e3423-d7cf-5571-bd8b-6e1ddbc77cf8", 00:23:15.572 "is_configured": true, 00:23:15.572 "data_offset": 0, 00:23:15.572 "data_size": 65536 00:23:15.572 }, 00:23:15.572 { 00:23:15.572 "name": "BaseBdev2", 00:23:15.572 "uuid": "bd2ed000-2d1b-5756-a2eb-898f8afc1f89", 00:23:15.572 "is_configured": true, 00:23:15.572 "data_offset": 0, 00:23:15.572 "data_size": 65536 00:23:15.572 }, 00:23:15.572 { 00:23:15.572 "name": "BaseBdev3", 00:23:15.572 "uuid": "0c844a61-fb73-5c33-ac19-0724870485c3", 00:23:15.572 "is_configured": true, 00:23:15.572 "data_offset": 0, 00:23:15.572 
"data_size": 65536 00:23:15.572 }, 00:23:15.572 { 00:23:15.572 "name": "BaseBdev4", 00:23:15.572 "uuid": "fcb1706f-23b4-5cf6-b04c-845d490388f4", 00:23:15.572 "is_configured": true, 00:23:15.572 "data_offset": 0, 00:23:15.572 "data_size": 65536 00:23:15.572 } 00:23:15.572 ] 00:23:15.572 }' 00:23:15.572 09:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:15.572 09:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:15.572 09:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:15.572 09:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:15.572 09:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:16.532 09:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:16.532 09:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:16.532 09:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:16.532 09:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:16.532 09:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:16.532 09:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:16.532 09:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.532 09:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.532 09:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.532 09:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.532 09:23:00 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.532 09:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:16.532 "name": "raid_bdev1", 00:23:16.532 "uuid": "733cd81c-7ff6-46a5-b8c5-19b2f5362197", 00:23:16.532 "strip_size_kb": 64, 00:23:16.532 "state": "online", 00:23:16.532 "raid_level": "raid5f", 00:23:16.532 "superblock": false, 00:23:16.532 "num_base_bdevs": 4, 00:23:16.532 "num_base_bdevs_discovered": 4, 00:23:16.532 "num_base_bdevs_operational": 4, 00:23:16.532 "process": { 00:23:16.532 "type": "rebuild", 00:23:16.533 "target": "spare", 00:23:16.533 "progress": { 00:23:16.533 "blocks": 88320, 00:23:16.533 "percent": 44 00:23:16.533 } 00:23:16.533 }, 00:23:16.533 "base_bdevs_list": [ 00:23:16.533 { 00:23:16.533 "name": "spare", 00:23:16.533 "uuid": "7a9e3423-d7cf-5571-bd8b-6e1ddbc77cf8", 00:23:16.533 "is_configured": true, 00:23:16.533 "data_offset": 0, 00:23:16.533 "data_size": 65536 00:23:16.533 }, 00:23:16.533 { 00:23:16.533 "name": "BaseBdev2", 00:23:16.533 "uuid": "bd2ed000-2d1b-5756-a2eb-898f8afc1f89", 00:23:16.533 "is_configured": true, 00:23:16.533 "data_offset": 0, 00:23:16.533 "data_size": 65536 00:23:16.533 }, 00:23:16.533 { 00:23:16.533 "name": "BaseBdev3", 00:23:16.533 "uuid": "0c844a61-fb73-5c33-ac19-0724870485c3", 00:23:16.533 "is_configured": true, 00:23:16.533 "data_offset": 0, 00:23:16.533 "data_size": 65536 00:23:16.533 }, 00:23:16.533 { 00:23:16.533 "name": "BaseBdev4", 00:23:16.533 "uuid": "fcb1706f-23b4-5cf6-b04c-845d490388f4", 00:23:16.533 "is_configured": true, 00:23:16.533 "data_offset": 0, 00:23:16.533 "data_size": 65536 00:23:16.533 } 00:23:16.533 ] 00:23:16.533 }' 00:23:16.533 09:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:16.533 09:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:16.791 09:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:23:16.791 09:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:16.791 09:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:17.725 09:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:17.725 09:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:17.726 09:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:17.726 09:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:17.726 09:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:17.726 09:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:17.726 09:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.726 09:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.726 09:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.726 09:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.726 09:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.726 09:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:17.726 "name": "raid_bdev1", 00:23:17.726 "uuid": "733cd81c-7ff6-46a5-b8c5-19b2f5362197", 00:23:17.726 "strip_size_kb": 64, 00:23:17.726 "state": "online", 00:23:17.726 "raid_level": "raid5f", 00:23:17.726 "superblock": false, 00:23:17.726 "num_base_bdevs": 4, 00:23:17.726 "num_base_bdevs_discovered": 4, 00:23:17.726 "num_base_bdevs_operational": 4, 00:23:17.726 "process": { 00:23:17.726 "type": "rebuild", 00:23:17.726 "target": "spare", 00:23:17.726 
"progress": { 00:23:17.726 "blocks": 109440, 00:23:17.726 "percent": 55 00:23:17.726 } 00:23:17.726 }, 00:23:17.726 "base_bdevs_list": [ 00:23:17.726 { 00:23:17.726 "name": "spare", 00:23:17.726 "uuid": "7a9e3423-d7cf-5571-bd8b-6e1ddbc77cf8", 00:23:17.726 "is_configured": true, 00:23:17.726 "data_offset": 0, 00:23:17.726 "data_size": 65536 00:23:17.726 }, 00:23:17.726 { 00:23:17.726 "name": "BaseBdev2", 00:23:17.726 "uuid": "bd2ed000-2d1b-5756-a2eb-898f8afc1f89", 00:23:17.726 "is_configured": true, 00:23:17.726 "data_offset": 0, 00:23:17.726 "data_size": 65536 00:23:17.726 }, 00:23:17.726 { 00:23:17.726 "name": "BaseBdev3", 00:23:17.726 "uuid": "0c844a61-fb73-5c33-ac19-0724870485c3", 00:23:17.726 "is_configured": true, 00:23:17.726 "data_offset": 0, 00:23:17.726 "data_size": 65536 00:23:17.726 }, 00:23:17.726 { 00:23:17.726 "name": "BaseBdev4", 00:23:17.726 "uuid": "fcb1706f-23b4-5cf6-b04c-845d490388f4", 00:23:17.726 "is_configured": true, 00:23:17.726 "data_offset": 0, 00:23:17.726 "data_size": 65536 00:23:17.726 } 00:23:17.726 ] 00:23:17.726 }' 00:23:17.726 09:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:17.726 09:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:17.726 09:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:17.985 09:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:17.985 09:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:18.998 09:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:18.998 09:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:18.998 09:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:18.998 09:23:02 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:18.998 09:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:18.998 09:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:18.998 09:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.998 09:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.998 09:23:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.998 09:23:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.998 09:23:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.998 09:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:18.998 "name": "raid_bdev1", 00:23:18.998 "uuid": "733cd81c-7ff6-46a5-b8c5-19b2f5362197", 00:23:18.998 "strip_size_kb": 64, 00:23:18.998 "state": "online", 00:23:18.998 "raid_level": "raid5f", 00:23:18.998 "superblock": false, 00:23:18.998 "num_base_bdevs": 4, 00:23:18.998 "num_base_bdevs_discovered": 4, 00:23:18.998 "num_base_bdevs_operational": 4, 00:23:18.998 "process": { 00:23:18.998 "type": "rebuild", 00:23:18.998 "target": "spare", 00:23:18.998 "progress": { 00:23:18.998 "blocks": 132480, 00:23:18.998 "percent": 67 00:23:18.998 } 00:23:18.998 }, 00:23:18.998 "base_bdevs_list": [ 00:23:18.998 { 00:23:18.999 "name": "spare", 00:23:18.999 "uuid": "7a9e3423-d7cf-5571-bd8b-6e1ddbc77cf8", 00:23:18.999 "is_configured": true, 00:23:18.999 "data_offset": 0, 00:23:18.999 "data_size": 65536 00:23:18.999 }, 00:23:18.999 { 00:23:18.999 "name": "BaseBdev2", 00:23:18.999 "uuid": "bd2ed000-2d1b-5756-a2eb-898f8afc1f89", 00:23:18.999 "is_configured": true, 00:23:18.999 "data_offset": 0, 00:23:18.999 "data_size": 65536 00:23:18.999 }, 00:23:18.999 { 
00:23:18.999 "name": "BaseBdev3", 00:23:18.999 "uuid": "0c844a61-fb73-5c33-ac19-0724870485c3", 00:23:18.999 "is_configured": true, 00:23:18.999 "data_offset": 0, 00:23:18.999 "data_size": 65536 00:23:18.999 }, 00:23:18.999 { 00:23:18.999 "name": "BaseBdev4", 00:23:18.999 "uuid": "fcb1706f-23b4-5cf6-b04c-845d490388f4", 00:23:18.999 "is_configured": true, 00:23:18.999 "data_offset": 0, 00:23:18.999 "data_size": 65536 00:23:18.999 } 00:23:18.999 ] 00:23:18.999 }' 00:23:18.999 09:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:18.999 09:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:18.999 09:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:18.999 09:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:18.999 09:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:19.934 09:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:19.934 09:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:19.934 09:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:19.934 09:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:19.934 09:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:19.934 09:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:19.934 09:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.934 09:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.934 09:23:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:23:19.934 09:23:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.192 09:23:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.192 09:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:20.192 "name": "raid_bdev1", 00:23:20.192 "uuid": "733cd81c-7ff6-46a5-b8c5-19b2f5362197", 00:23:20.192 "strip_size_kb": 64, 00:23:20.192 "state": "online", 00:23:20.192 "raid_level": "raid5f", 00:23:20.192 "superblock": false, 00:23:20.192 "num_base_bdevs": 4, 00:23:20.192 "num_base_bdevs_discovered": 4, 00:23:20.192 "num_base_bdevs_operational": 4, 00:23:20.192 "process": { 00:23:20.192 "type": "rebuild", 00:23:20.192 "target": "spare", 00:23:20.192 "progress": { 00:23:20.192 "blocks": 153600, 00:23:20.192 "percent": 78 00:23:20.192 } 00:23:20.192 }, 00:23:20.192 "base_bdevs_list": [ 00:23:20.192 { 00:23:20.193 "name": "spare", 00:23:20.193 "uuid": "7a9e3423-d7cf-5571-bd8b-6e1ddbc77cf8", 00:23:20.193 "is_configured": true, 00:23:20.193 "data_offset": 0, 00:23:20.193 "data_size": 65536 00:23:20.193 }, 00:23:20.193 { 00:23:20.193 "name": "BaseBdev2", 00:23:20.193 "uuid": "bd2ed000-2d1b-5756-a2eb-898f8afc1f89", 00:23:20.193 "is_configured": true, 00:23:20.193 "data_offset": 0, 00:23:20.193 "data_size": 65536 00:23:20.193 }, 00:23:20.193 { 00:23:20.193 "name": "BaseBdev3", 00:23:20.193 "uuid": "0c844a61-fb73-5c33-ac19-0724870485c3", 00:23:20.193 "is_configured": true, 00:23:20.193 "data_offset": 0, 00:23:20.193 "data_size": 65536 00:23:20.193 }, 00:23:20.193 { 00:23:20.193 "name": "BaseBdev4", 00:23:20.193 "uuid": "fcb1706f-23b4-5cf6-b04c-845d490388f4", 00:23:20.193 "is_configured": true, 00:23:20.193 "data_offset": 0, 00:23:20.193 "data_size": 65536 00:23:20.193 } 00:23:20.193 ] 00:23:20.193 }' 00:23:20.193 09:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:20.193 09:23:03 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:20.193 09:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:20.193 09:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:20.193 09:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:21.127 09:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:21.127 09:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:21.127 09:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:21.127 09:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:21.127 09:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:21.127 09:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:21.127 09:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.127 09:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.127 09:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.127 09:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.127 09:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.385 09:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:21.385 "name": "raid_bdev1", 00:23:21.386 "uuid": "733cd81c-7ff6-46a5-b8c5-19b2f5362197", 00:23:21.386 "strip_size_kb": 64, 00:23:21.386 "state": "online", 00:23:21.386 "raid_level": "raid5f", 00:23:21.386 "superblock": false, 00:23:21.386 "num_base_bdevs": 4, 00:23:21.386 
"num_base_bdevs_discovered": 4, 00:23:21.386 "num_base_bdevs_operational": 4, 00:23:21.386 "process": { 00:23:21.386 "type": "rebuild", 00:23:21.386 "target": "spare", 00:23:21.386 "progress": { 00:23:21.386 "blocks": 176640, 00:23:21.386 "percent": 89 00:23:21.386 } 00:23:21.386 }, 00:23:21.386 "base_bdevs_list": [ 00:23:21.386 { 00:23:21.386 "name": "spare", 00:23:21.386 "uuid": "7a9e3423-d7cf-5571-bd8b-6e1ddbc77cf8", 00:23:21.386 "is_configured": true, 00:23:21.386 "data_offset": 0, 00:23:21.386 "data_size": 65536 00:23:21.386 }, 00:23:21.386 { 00:23:21.386 "name": "BaseBdev2", 00:23:21.386 "uuid": "bd2ed000-2d1b-5756-a2eb-898f8afc1f89", 00:23:21.386 "is_configured": true, 00:23:21.386 "data_offset": 0, 00:23:21.386 "data_size": 65536 00:23:21.386 }, 00:23:21.386 { 00:23:21.386 "name": "BaseBdev3", 00:23:21.386 "uuid": "0c844a61-fb73-5c33-ac19-0724870485c3", 00:23:21.386 "is_configured": true, 00:23:21.386 "data_offset": 0, 00:23:21.386 "data_size": 65536 00:23:21.386 }, 00:23:21.386 { 00:23:21.386 "name": "BaseBdev4", 00:23:21.386 "uuid": "fcb1706f-23b4-5cf6-b04c-845d490388f4", 00:23:21.386 "is_configured": true, 00:23:21.386 "data_offset": 0, 00:23:21.386 "data_size": 65536 00:23:21.386 } 00:23:21.386 ] 00:23:21.386 }' 00:23:21.386 09:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:21.386 09:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:21.386 09:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:21.386 09:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:21.386 09:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:22.330 [2024-10-15 09:23:06.120723] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:22.330 [2024-10-15 09:23:06.120858] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:22.330 [2024-10-15 09:23:06.120945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:22.330 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:22.330 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:22.330 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:22.330 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:22.330 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:22.330 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:22.330 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.330 09:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.330 09:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.330 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.330 09:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.330 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:22.330 "name": "raid_bdev1", 00:23:22.330 "uuid": "733cd81c-7ff6-46a5-b8c5-19b2f5362197", 00:23:22.330 "strip_size_kb": 64, 00:23:22.330 "state": "online", 00:23:22.330 "raid_level": "raid5f", 00:23:22.330 "superblock": false, 00:23:22.330 "num_base_bdevs": 4, 00:23:22.330 "num_base_bdevs_discovered": 4, 00:23:22.330 "num_base_bdevs_operational": 4, 00:23:22.330 "base_bdevs_list": [ 00:23:22.330 { 00:23:22.330 "name": "spare", 00:23:22.330 "uuid": 
"7a9e3423-d7cf-5571-bd8b-6e1ddbc77cf8", 00:23:22.330 "is_configured": true, 00:23:22.330 "data_offset": 0, 00:23:22.330 "data_size": 65536 00:23:22.330 }, 00:23:22.330 { 00:23:22.330 "name": "BaseBdev2", 00:23:22.330 "uuid": "bd2ed000-2d1b-5756-a2eb-898f8afc1f89", 00:23:22.330 "is_configured": true, 00:23:22.330 "data_offset": 0, 00:23:22.330 "data_size": 65536 00:23:22.330 }, 00:23:22.330 { 00:23:22.330 "name": "BaseBdev3", 00:23:22.330 "uuid": "0c844a61-fb73-5c33-ac19-0724870485c3", 00:23:22.330 "is_configured": true, 00:23:22.330 "data_offset": 0, 00:23:22.330 "data_size": 65536 00:23:22.330 }, 00:23:22.330 { 00:23:22.330 "name": "BaseBdev4", 00:23:22.330 "uuid": "fcb1706f-23b4-5cf6-b04c-845d490388f4", 00:23:22.330 "is_configured": true, 00:23:22.330 "data_offset": 0, 00:23:22.330 "data_size": 65536 00:23:22.330 } 00:23:22.330 ] 00:23:22.330 }' 00:23:22.330 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:22.589 "name": "raid_bdev1", 00:23:22.589 "uuid": "733cd81c-7ff6-46a5-b8c5-19b2f5362197", 00:23:22.589 "strip_size_kb": 64, 00:23:22.589 "state": "online", 00:23:22.589 "raid_level": "raid5f", 00:23:22.589 "superblock": false, 00:23:22.589 "num_base_bdevs": 4, 00:23:22.589 "num_base_bdevs_discovered": 4, 00:23:22.589 "num_base_bdevs_operational": 4, 00:23:22.589 "base_bdevs_list": [ 00:23:22.589 { 00:23:22.589 "name": "spare", 00:23:22.589 "uuid": "7a9e3423-d7cf-5571-bd8b-6e1ddbc77cf8", 00:23:22.589 "is_configured": true, 00:23:22.589 "data_offset": 0, 00:23:22.589 "data_size": 65536 00:23:22.589 }, 00:23:22.589 { 00:23:22.589 "name": "BaseBdev2", 00:23:22.589 "uuid": "bd2ed000-2d1b-5756-a2eb-898f8afc1f89", 00:23:22.589 "is_configured": true, 00:23:22.589 "data_offset": 0, 00:23:22.589 "data_size": 65536 00:23:22.589 }, 00:23:22.589 { 00:23:22.589 "name": "BaseBdev3", 00:23:22.589 "uuid": "0c844a61-fb73-5c33-ac19-0724870485c3", 00:23:22.589 "is_configured": true, 00:23:22.589 "data_offset": 0, 00:23:22.589 "data_size": 65536 00:23:22.589 }, 00:23:22.589 { 00:23:22.589 "name": "BaseBdev4", 00:23:22.589 "uuid": "fcb1706f-23b4-5cf6-b04c-845d490388f4", 00:23:22.589 "is_configured": true, 00:23:22.589 "data_offset": 0, 00:23:22.589 "data_size": 65536 00:23:22.589 } 00:23:22.589 ] 00:23:22.589 }' 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:22.589 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:22.590 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:22.590 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:22.590 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:22.590 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.590 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.590 09:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.590 09:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.590 09:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:22.848 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:22.848 "name": "raid_bdev1", 00:23:22.848 "uuid": "733cd81c-7ff6-46a5-b8c5-19b2f5362197", 00:23:22.848 "strip_size_kb": 64, 00:23:22.848 "state": "online", 00:23:22.848 "raid_level": "raid5f", 00:23:22.848 "superblock": false, 00:23:22.848 "num_base_bdevs": 4, 00:23:22.848 "num_base_bdevs_discovered": 4, 00:23:22.848 "num_base_bdevs_operational": 4, 00:23:22.848 "base_bdevs_list": [ 00:23:22.848 { 00:23:22.848 "name": "spare", 00:23:22.848 "uuid": "7a9e3423-d7cf-5571-bd8b-6e1ddbc77cf8", 00:23:22.848 "is_configured": true, 00:23:22.848 "data_offset": 0, 00:23:22.848 "data_size": 65536 00:23:22.848 }, 00:23:22.848 { 00:23:22.848 "name": "BaseBdev2", 00:23:22.848 "uuid": "bd2ed000-2d1b-5756-a2eb-898f8afc1f89", 00:23:22.848 "is_configured": true, 00:23:22.849 "data_offset": 0, 00:23:22.849 "data_size": 65536 00:23:22.849 }, 00:23:22.849 { 00:23:22.849 "name": "BaseBdev3", 00:23:22.849 "uuid": "0c844a61-fb73-5c33-ac19-0724870485c3", 00:23:22.849 "is_configured": true, 00:23:22.849 "data_offset": 0, 00:23:22.849 "data_size": 65536 00:23:22.849 }, 00:23:22.849 { 00:23:22.849 "name": "BaseBdev4", 00:23:22.849 "uuid": "fcb1706f-23b4-5cf6-b04c-845d490388f4", 00:23:22.849 "is_configured": true, 00:23:22.849 "data_offset": 0, 00:23:22.849 "data_size": 65536 00:23:22.849 } 00:23:22.849 ] 00:23:22.849 }' 00:23:22.849 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:22.849 09:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.107 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:23.107 09:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.107 09:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.107 [2024-10-15 09:23:06.974265] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:23.107 [2024-10-15 09:23:06.974327] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:23.107 [2024-10-15 09:23:06.974477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:23.107 [2024-10-15 09:23:06.974636] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:23.107 [2024-10-15 09:23:06.974664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:23.107 09:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.107 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:23.107 09:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.107 09:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:23:23.107 09:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.107 09:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.365 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:23.365 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:23.365 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:23.365 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:23.365 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:23.365 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:23.365 09:23:07 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:23:23.365 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:23.365 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:23.365 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:23.365 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:23.365 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:23.365 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:23.623 /dev/nbd0 00:23:23.623 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:23.623 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:23.623 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:23.623 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:23:23.623 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:23.623 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:23.623 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:23.623 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:23:23.623 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:23.623 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:23.624 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:23.624 1+0 records in 
00:23:23.624 1+0 records out 00:23:23.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371569 s, 11.0 MB/s 00:23:23.624 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:23.624 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:23:23.624 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:23.624 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:23.624 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:23:23.624 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:23.624 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:23.624 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:23.882 /dev/nbd1 00:23:23.882 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:23.882 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:23.882 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:23:23.882 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:23:23.882 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:23.882 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:23.882 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:23:23.882 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:23:23.882 09:23:07 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:23.882 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:23.882 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:23.882 1+0 records in 00:23:23.882 1+0 records out 00:23:23.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470048 s, 8.7 MB/s 00:23:23.882 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:23.882 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:23:23.882 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:23.882 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:23.882 09:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:23:23.882 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:23.882 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:23.882 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:24.141 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:24.141 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:24.141 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:24.141 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:24.141 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:24.141 09:23:07 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:24.141 09:23:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:24.399 09:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:24.399 09:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:24.399 09:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:24.399 09:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:24.399 09:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:24.399 09:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:24.399 09:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:24.399 09:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:24.399 09:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:24.399 09:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:24.657 09:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:24.657 09:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:24.657 09:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:24.657 09:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:24.657 09:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:24.657 09:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:24.657 09:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:23:24.657 09:23:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:24.657 09:23:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:23:24.657 09:23:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85321 00:23:24.657 09:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 85321 ']' 00:23:24.657 09:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 85321 00:23:24.657 09:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:23:24.657 09:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:24.657 09:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85321 00:23:24.657 killing process with pid 85321 00:23:24.657 Received shutdown signal, test time was about 60.000000 seconds 00:23:24.657 00:23:24.657 Latency(us) 00:23:24.657 [2024-10-15T09:23:08.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.657 [2024-10-15T09:23:08.585Z] =================================================================================================================== 00:23:24.657 [2024-10-15T09:23:08.585Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:24.657 09:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:24.657 09:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:24.657 09:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85321' 00:23:24.657 09:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 85321 00:23:24.657 09:23:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 85321 00:23:24.657 [2024-10-15 09:23:08.578839] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:25.224 [2024-10-15 09:23:09.071590] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:26.601 ************************************ 00:23:26.601 END TEST raid5f_rebuild_test 00:23:26.601 ************************************ 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:23:26.601 00:23:26.601 real 0m20.332s 00:23:26.601 user 0m25.231s 00:23:26.601 sys 0m2.363s 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.601 09:23:10 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:23:26.601 09:23:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:23:26.601 09:23:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:26.601 09:23:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:26.601 ************************************ 00:23:26.601 START TEST raid5f_rebuild_test_sb 00:23:26.601 ************************************ 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 
)) 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local 
raid_bdev_size 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85829 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85829 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 85829 ']' 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
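The options assembled above (`-z 64` strip size, 4 base bdevs, raid5f, and the 512-byte block length and 63488-block per-bdev data region reported later in the trace) fix the sizes that show up throughout this run: the 384-block write unit, the 196608-byte `dd` block size, and the 190464-block exported raid size. A standalone sketch of that arithmetic, with all inputs read off the log and the variable names purely illustrative:

```shell
# Geometry implied by the test parameters in this trace. All numeric
# inputs are taken from the log itself; names are illustrative only.
strip_size_kb=64          # from create_arg '-z 64'
blocklen=512              # blocklen reported at raid creation
num_base_bdevs=4          # raid_rebuild_test raid5f 4 ...
base_data_blocks=63488    # data_size reported per base bdev

strip_blocks=$(( strip_size_kb * 1024 / blocklen ))             # 128 blocks

# raid5f dedicates one strip per stripe to parity, so a full-stripe
# write covers (N - 1) data strips:
write_unit_blocks=$(( (num_base_bdevs - 1) * strip_blocks ))    # 384 blocks
write_unit_bytes=$(( write_unit_blocks * blocklen ))            # 196608 bytes

# and the exported raid size is (N - 1) times one bdev's data region:
raid_size_blocks=$(( (num_base_bdevs - 1) * base_data_blocks )) # 190464 blocks

echo "$write_unit_blocks $write_unit_bytes $raid_size_blocks"
```

These derived values match the trace: `write_unit_size=384`, the `dd ... bs=196608` full-stripe writes to `/dev/nbd0`, and `blockcnt 190464` at raid creation.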
00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:26.601 09:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.601 [2024-10-15 09:23:10.394235] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:23:26.601 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:26.601 Zero copy mechanism will not be used. 00:23:26.601 [2024-10-15 09:23:10.394457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85829 ] 00:23:26.859 [2024-10-15 09:23:10.569439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.859 [2024-10-15 09:23:10.716037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.117 [2024-10-15 09:23:10.944425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:27.117 [2024-10-15 09:23:10.944530] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.685 BaseBdev1_malloc 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.685 [2024-10-15 09:23:11.393280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:27.685 [2024-10-15 09:23:11.393369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.685 [2024-10-15 09:23:11.393407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:27.685 [2024-10-15 09:23:11.393427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.685 [2024-10-15 09:23:11.396391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.685 [2024-10-15 09:23:11.396446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:27.685 BaseBdev1 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.685 BaseBdev2_malloc 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:27.685 
09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.685 [2024-10-15 09:23:11.449808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:27.685 [2024-10-15 09:23:11.449889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.685 [2024-10-15 09:23:11.449920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:27.685 [2024-10-15 09:23:11.449939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.685 [2024-10-15 09:23:11.452784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.685 [2024-10-15 09:23:11.452833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:27.685 BaseBdev2 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.685 BaseBdev3_malloc 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:27.685 [2024-10-15 09:23:11.516790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:27.685 [2024-10-15 09:23:11.516910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.685 [2024-10-15 09:23:11.516945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:27.685 [2024-10-15 09:23:11.516964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.685 [2024-10-15 09:23:11.519985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.685 [2024-10-15 09:23:11.520040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:27.685 BaseBdev3 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.685 BaseBdev4_malloc 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.685 [2024-10-15 09:23:11.578284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:27.685 
[2024-10-15 09:23:11.578392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.685 [2024-10-15 09:23:11.578435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:27.685 [2024-10-15 09:23:11.578454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.685 [2024-10-15 09:23:11.581428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.685 [2024-10-15 09:23:11.581498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:27.685 BaseBdev4 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.685 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.945 spare_malloc 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.945 spare_delay 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.945 09:23:11 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.945 [2024-10-15 09:23:11.643762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:27.945 [2024-10-15 09:23:11.643856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.945 [2024-10-15 09:23:11.643904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:27.945 [2024-10-15 09:23:11.643923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.945 [2024-10-15 09:23:11.646967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.945 [2024-10-15 09:23:11.647046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:27.945 spare 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.945 [2024-10-15 09:23:11.651911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:27.945 [2024-10-15 09:23:11.654661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:27.945 [2024-10-15 09:23:11.654760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:27.945 [2024-10-15 09:23:11.654849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:27.945 [2024-10-15 09:23:11.655201] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:27.945 [2024-10-15 
09:23:11.655238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:27.945 [2024-10-15 09:23:11.655572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:27.945 [2024-10-15 09:23:11.662569] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:27.945 [2024-10-15 09:23:11.662600] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:27.945 [2024-10-15 09:23:11.662881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.945 09:23:11 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:27.945 "name": "raid_bdev1", 00:23:27.945 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:27.945 "strip_size_kb": 64, 00:23:27.945 "state": "online", 00:23:27.945 "raid_level": "raid5f", 00:23:27.945 "superblock": true, 00:23:27.945 "num_base_bdevs": 4, 00:23:27.945 "num_base_bdevs_discovered": 4, 00:23:27.945 "num_base_bdevs_operational": 4, 00:23:27.945 "base_bdevs_list": [ 00:23:27.945 { 00:23:27.945 "name": "BaseBdev1", 00:23:27.945 "uuid": "d008c7d1-1212-5b19-8998-b805ad1943f8", 00:23:27.945 "is_configured": true, 00:23:27.945 "data_offset": 2048, 00:23:27.945 "data_size": 63488 00:23:27.945 }, 00:23:27.945 { 00:23:27.945 "name": "BaseBdev2", 00:23:27.945 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:27.945 "is_configured": true, 00:23:27.945 "data_offset": 2048, 00:23:27.945 "data_size": 63488 00:23:27.945 }, 00:23:27.945 { 00:23:27.945 "name": "BaseBdev3", 00:23:27.945 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:27.945 "is_configured": true, 00:23:27.945 "data_offset": 2048, 00:23:27.945 "data_size": 63488 00:23:27.945 }, 00:23:27.945 { 00:23:27.945 "name": "BaseBdev4", 00:23:27.945 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:27.945 "is_configured": true, 00:23:27.945 "data_offset": 2048, 00:23:27.945 "data_size": 63488 00:23:27.945 } 00:23:27.945 ] 00:23:27.945 }' 00:23:27.945 09:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:27.945 09:23:11 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.513 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:28.513 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.513 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.513 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:28.513 [2024-10-15 09:23:12.171436] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:28.513 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.514 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:23:28.514 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:28.514 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.514 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.514 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.514 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.514 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:23:28.514 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:28.514 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:28.514 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:28.514 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:28.514 09:23:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:28.514 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:28.514 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:28.514 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:28.514 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:28.514 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:28.514 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:28.514 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:28.514 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:28.851 [2024-10-15 09:23:12.563356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:28.851 /dev/nbd0 00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 
00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:28.851 1+0 records in 00:23:28.851 1+0 records out 00:23:28.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308538 s, 13.3 MB/s 00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:23:28.851 09:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:23:29.444 496+0 records in 00:23:29.444 496+0 records out 00:23:29.444 97517568 bytes (98 MB, 93 MiB) copied, 0.667694 s, 146 MB/s 00:23:29.444 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:29.444 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:29.444 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:29.444 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:29.444 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:29.444 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:29.444 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:29.703 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:29.703 [2024-10-15 09:23:13.613313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:29.703 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:29.703 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:29.703 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:29.703 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:29.703 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:29.703 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:29.703 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:29.703 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:29.703 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.703 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:29.703 [2024-10-15 09:23:13.625818] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:29.961 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.961 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:29.961 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:29.961 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:29.961 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:29.961 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:29.961 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:29.961 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:29.961 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:29.961 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:29.961 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:29.961 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.961 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.961 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.961 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.961 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.961 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:29.961 "name": "raid_bdev1", 00:23:29.961 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:29.962 "strip_size_kb": 64, 00:23:29.962 "state": "online", 00:23:29.962 "raid_level": "raid5f", 00:23:29.962 "superblock": true, 00:23:29.962 "num_base_bdevs": 4, 00:23:29.962 "num_base_bdevs_discovered": 3, 00:23:29.962 "num_base_bdevs_operational": 3, 00:23:29.962 "base_bdevs_list": [ 00:23:29.962 { 00:23:29.962 "name": null, 00:23:29.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.962 "is_configured": false, 00:23:29.962 "data_offset": 0, 00:23:29.962 "data_size": 63488 00:23:29.962 }, 00:23:29.962 { 00:23:29.962 "name": "BaseBdev2", 00:23:29.962 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:29.962 "is_configured": true, 00:23:29.962 "data_offset": 2048, 00:23:29.962 "data_size": 63488 00:23:29.962 }, 00:23:29.962 { 00:23:29.962 "name": "BaseBdev3", 00:23:29.962 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:29.962 "is_configured": true, 00:23:29.962 "data_offset": 2048, 00:23:29.962 "data_size": 63488 00:23:29.962 }, 00:23:29.962 { 00:23:29.962 "name": "BaseBdev4", 00:23:29.962 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:29.962 "is_configured": true, 00:23:29.962 "data_offset": 2048, 00:23:29.962 "data_size": 63488 00:23:29.962 } 00:23:29.962 ] 00:23:29.962 }' 00:23:29.962 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:29.962 09:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.220 09:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:30.220 09:23:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.220 09:23:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.220 [2024-10-15 09:23:14.133981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:23:30.479 [2024-10-15 09:23:14.149383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:23:30.479 09:23:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.479 09:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:30.479 [2024-10-15 09:23:14.159042] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:31.417 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:31.417 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:31.417 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:31.417 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:31.417 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:31.417 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.417 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.417 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.417 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.417 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.417 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:31.417 "name": "raid_bdev1", 00:23:31.417 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:31.417 "strip_size_kb": 64, 00:23:31.417 "state": "online", 00:23:31.417 "raid_level": "raid5f", 00:23:31.417 "superblock": true, 00:23:31.417 "num_base_bdevs": 4, 
00:23:31.417 "num_base_bdevs_discovered": 4, 00:23:31.417 "num_base_bdevs_operational": 4, 00:23:31.417 "process": { 00:23:31.417 "type": "rebuild", 00:23:31.417 "target": "spare", 00:23:31.417 "progress": { 00:23:31.417 "blocks": 17280, 00:23:31.417 "percent": 9 00:23:31.417 } 00:23:31.417 }, 00:23:31.417 "base_bdevs_list": [ 00:23:31.417 { 00:23:31.417 "name": "spare", 00:23:31.417 "uuid": "e3ff7895-e0b7-589a-89d2-2e972ebcc411", 00:23:31.417 "is_configured": true, 00:23:31.417 "data_offset": 2048, 00:23:31.417 "data_size": 63488 00:23:31.417 }, 00:23:31.417 { 00:23:31.417 "name": "BaseBdev2", 00:23:31.417 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:31.417 "is_configured": true, 00:23:31.417 "data_offset": 2048, 00:23:31.417 "data_size": 63488 00:23:31.417 }, 00:23:31.417 { 00:23:31.417 "name": "BaseBdev3", 00:23:31.417 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:31.417 "is_configured": true, 00:23:31.417 "data_offset": 2048, 00:23:31.417 "data_size": 63488 00:23:31.417 }, 00:23:31.417 { 00:23:31.417 "name": "BaseBdev4", 00:23:31.417 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:31.417 "is_configured": true, 00:23:31.417 "data_offset": 2048, 00:23:31.417 "data_size": 63488 00:23:31.417 } 00:23:31.417 ] 00:23:31.417 }' 00:23:31.417 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:31.417 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:31.417 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:31.417 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:31.417 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:31.417 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.417 09:23:15 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.417 [2024-10-15 09:23:15.337568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:31.676 [2024-10-15 09:23:15.375049] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:31.676 [2024-10-15 09:23:15.375253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:31.676 [2024-10-15 09:23:15.375285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:31.676 [2024-10-15 09:23:15.375301] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:31.676 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.676 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:31.676 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:31.676 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:31.676 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:31.676 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:31.676 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:31.676 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:31.676 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:31.676 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:31.676 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:31.676 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.676 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.676 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.676 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.676 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.676 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:31.676 "name": "raid_bdev1", 00:23:31.676 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:31.676 "strip_size_kb": 64, 00:23:31.676 "state": "online", 00:23:31.676 "raid_level": "raid5f", 00:23:31.676 "superblock": true, 00:23:31.676 "num_base_bdevs": 4, 00:23:31.676 "num_base_bdevs_discovered": 3, 00:23:31.676 "num_base_bdevs_operational": 3, 00:23:31.676 "base_bdevs_list": [ 00:23:31.676 { 00:23:31.676 "name": null, 00:23:31.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.676 "is_configured": false, 00:23:31.676 "data_offset": 0, 00:23:31.676 "data_size": 63488 00:23:31.676 }, 00:23:31.676 { 00:23:31.676 "name": "BaseBdev2", 00:23:31.676 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:31.676 "is_configured": true, 00:23:31.676 "data_offset": 2048, 00:23:31.676 "data_size": 63488 00:23:31.676 }, 00:23:31.676 { 00:23:31.676 "name": "BaseBdev3", 00:23:31.676 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:31.676 "is_configured": true, 00:23:31.676 "data_offset": 2048, 00:23:31.676 "data_size": 63488 00:23:31.676 }, 00:23:31.676 { 00:23:31.676 "name": "BaseBdev4", 00:23:31.676 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:31.676 "is_configured": true, 00:23:31.676 "data_offset": 2048, 00:23:31.676 "data_size": 63488 00:23:31.676 } 00:23:31.676 ] 00:23:31.676 }' 00:23:31.676 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.676 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.244 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:32.244 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:32.244 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:32.244 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:32.244 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:32.244 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.244 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.244 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.244 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.244 09:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.244 09:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:32.244 "name": "raid_bdev1", 00:23:32.244 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:32.244 "strip_size_kb": 64, 00:23:32.244 "state": "online", 00:23:32.244 "raid_level": "raid5f", 00:23:32.244 "superblock": true, 00:23:32.244 "num_base_bdevs": 4, 00:23:32.244 "num_base_bdevs_discovered": 3, 00:23:32.244 "num_base_bdevs_operational": 3, 00:23:32.244 "base_bdevs_list": [ 00:23:32.244 { 00:23:32.244 "name": null, 00:23:32.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.244 "is_configured": false, 00:23:32.244 "data_offset": 0, 00:23:32.244 "data_size": 63488 00:23:32.244 }, 00:23:32.244 { 
00:23:32.244 "name": "BaseBdev2", 00:23:32.244 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:32.244 "is_configured": true, 00:23:32.244 "data_offset": 2048, 00:23:32.244 "data_size": 63488 00:23:32.244 }, 00:23:32.244 { 00:23:32.244 "name": "BaseBdev3", 00:23:32.244 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:32.244 "is_configured": true, 00:23:32.244 "data_offset": 2048, 00:23:32.244 "data_size": 63488 00:23:32.244 }, 00:23:32.244 { 00:23:32.244 "name": "BaseBdev4", 00:23:32.244 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:32.244 "is_configured": true, 00:23:32.244 "data_offset": 2048, 00:23:32.244 "data_size": 63488 00:23:32.244 } 00:23:32.244 ] 00:23:32.244 }' 00:23:32.244 09:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:32.244 09:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:32.244 09:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:32.244 09:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:32.244 09:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:32.244 09:23:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.244 09:23:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.244 [2024-10-15 09:23:16.121889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:32.244 [2024-10-15 09:23:16.136239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:23:32.244 09:23:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.244 09:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:32.244 [2024-10-15 09:23:16.145587] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:33.623 "name": "raid_bdev1", 00:23:33.623 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:33.623 "strip_size_kb": 64, 00:23:33.623 "state": "online", 00:23:33.623 "raid_level": "raid5f", 00:23:33.623 "superblock": true, 00:23:33.623 "num_base_bdevs": 4, 00:23:33.623 "num_base_bdevs_discovered": 4, 00:23:33.623 "num_base_bdevs_operational": 4, 00:23:33.623 "process": { 00:23:33.623 "type": "rebuild", 00:23:33.623 "target": "spare", 00:23:33.623 "progress": { 00:23:33.623 "blocks": 17280, 00:23:33.623 "percent": 9 00:23:33.623 } 00:23:33.623 }, 00:23:33.623 "base_bdevs_list": [ 00:23:33.623 { 00:23:33.623 "name": "spare", 00:23:33.623 "uuid": 
"e3ff7895-e0b7-589a-89d2-2e972ebcc411", 00:23:33.623 "is_configured": true, 00:23:33.623 "data_offset": 2048, 00:23:33.623 "data_size": 63488 00:23:33.623 }, 00:23:33.623 { 00:23:33.623 "name": "BaseBdev2", 00:23:33.623 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:33.623 "is_configured": true, 00:23:33.623 "data_offset": 2048, 00:23:33.623 "data_size": 63488 00:23:33.623 }, 00:23:33.623 { 00:23:33.623 "name": "BaseBdev3", 00:23:33.623 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:33.623 "is_configured": true, 00:23:33.623 "data_offset": 2048, 00:23:33.623 "data_size": 63488 00:23:33.623 }, 00:23:33.623 { 00:23:33.623 "name": "BaseBdev4", 00:23:33.623 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:33.623 "is_configured": true, 00:23:33.623 "data_offset": 2048, 00:23:33.623 "data_size": 63488 00:23:33.623 } 00:23:33.623 ] 00:23:33.623 }' 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:23:33.623 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=708 00:23:33.623 
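The trace above captures a real bash error: at `bdev_raid.sh` line 666, `'[' = false ']'` produces `[: =: unary operator expected` because a variable expanded to nothing, leaving `[` with a missing left operand. A minimal sketch of the failure mode and the usual quoting fix (the variable name here is hypothetical; the actual variable at line 666 is not shown in the log):

```shell
#!/usr/bin/env bash
flag=""                      # empty/unset, as apparently happened at bdev_raid.sh:666
# Unquoted, [ $flag = false ] expands to [ = false ] -> "unary operator expected".
# Quoting the expansion (or using bash's [[ ]]) keeps the operand in place:
if [ "$flag" = false ]; then
    result="flag is false"
else
    result="flag is empty, not false"
fi
echo "$result"
```

Note the test still continues past the error because the line is evaluated in a context where the non-zero status is tolerated; the `[[ 0 == 0 ]]` checks that follow each `rpc_cmd` are the assertions that actually gate the test.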
09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:33.623 "name": "raid_bdev1", 00:23:33.623 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:33.623 "strip_size_kb": 64, 00:23:33.623 "state": "online", 00:23:33.623 "raid_level": "raid5f", 00:23:33.623 "superblock": true, 00:23:33.623 "num_base_bdevs": 4, 00:23:33.623 "num_base_bdevs_discovered": 4, 00:23:33.623 "num_base_bdevs_operational": 4, 00:23:33.623 "process": { 00:23:33.623 "type": "rebuild", 00:23:33.623 "target": "spare", 00:23:33.623 "progress": { 00:23:33.623 "blocks": 21120, 00:23:33.623 "percent": 11 00:23:33.623 } 00:23:33.623 }, 00:23:33.623 "base_bdevs_list": [ 00:23:33.623 { 00:23:33.623 "name": "spare", 00:23:33.623 "uuid": 
"e3ff7895-e0b7-589a-89d2-2e972ebcc411", 00:23:33.623 "is_configured": true, 00:23:33.623 "data_offset": 2048, 00:23:33.623 "data_size": 63488 00:23:33.623 }, 00:23:33.623 { 00:23:33.623 "name": "BaseBdev2", 00:23:33.623 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:33.623 "is_configured": true, 00:23:33.623 "data_offset": 2048, 00:23:33.623 "data_size": 63488 00:23:33.623 }, 00:23:33.623 { 00:23:33.623 "name": "BaseBdev3", 00:23:33.623 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:33.623 "is_configured": true, 00:23:33.623 "data_offset": 2048, 00:23:33.623 "data_size": 63488 00:23:33.623 }, 00:23:33.623 { 00:23:33.623 "name": "BaseBdev4", 00:23:33.623 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:33.623 "is_configured": true, 00:23:33.623 "data_offset": 2048, 00:23:33.623 "data_size": 63488 00:23:33.623 } 00:23:33.623 ] 00:23:33.623 }' 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:33.623 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:33.624 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:33.624 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:33.624 09:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:34.564 09:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:34.564 09:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:34.564 09:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:34.564 09:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:34.564 09:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:23:34.564 09:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:34.564 09:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.564 09:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.564 09:23:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.564 09:23:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:34.564 09:23:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.823 09:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:34.823 "name": "raid_bdev1", 00:23:34.823 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:34.823 "strip_size_kb": 64, 00:23:34.823 "state": "online", 00:23:34.823 "raid_level": "raid5f", 00:23:34.823 "superblock": true, 00:23:34.823 "num_base_bdevs": 4, 00:23:34.823 "num_base_bdevs_discovered": 4, 00:23:34.823 "num_base_bdevs_operational": 4, 00:23:34.823 "process": { 00:23:34.823 "type": "rebuild", 00:23:34.823 "target": "spare", 00:23:34.823 "progress": { 00:23:34.823 "blocks": 42240, 00:23:34.823 "percent": 22 00:23:34.823 } 00:23:34.823 }, 00:23:34.823 "base_bdevs_list": [ 00:23:34.823 { 00:23:34.823 "name": "spare", 00:23:34.823 "uuid": "e3ff7895-e0b7-589a-89d2-2e972ebcc411", 00:23:34.823 "is_configured": true, 00:23:34.823 "data_offset": 2048, 00:23:34.823 "data_size": 63488 00:23:34.823 }, 00:23:34.823 { 00:23:34.823 "name": "BaseBdev2", 00:23:34.823 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:34.823 "is_configured": true, 00:23:34.823 "data_offset": 2048, 00:23:34.823 "data_size": 63488 00:23:34.823 }, 00:23:34.823 { 00:23:34.823 "name": "BaseBdev3", 00:23:34.823 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:34.823 "is_configured": true, 00:23:34.823 
"data_offset": 2048, 00:23:34.823 "data_size": 63488 00:23:34.823 }, 00:23:34.823 { 00:23:34.823 "name": "BaseBdev4", 00:23:34.823 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:34.823 "is_configured": true, 00:23:34.823 "data_offset": 2048, 00:23:34.823 "data_size": 63488 00:23:34.823 } 00:23:34.823 ] 00:23:34.823 }' 00:23:34.823 09:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:34.823 09:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:34.823 09:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:34.823 09:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:34.823 09:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:35.761 09:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:35.761 09:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:35.761 09:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:35.761 09:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:35.761 09:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:35.761 09:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:35.761 09:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.761 09:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.761 09:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.761 09:23:19 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:35.761 09:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.761 09:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:35.761 "name": "raid_bdev1", 00:23:35.761 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:35.761 "strip_size_kb": 64, 00:23:35.761 "state": "online", 00:23:35.761 "raid_level": "raid5f", 00:23:35.761 "superblock": true, 00:23:35.761 "num_base_bdevs": 4, 00:23:35.761 "num_base_bdevs_discovered": 4, 00:23:35.761 "num_base_bdevs_operational": 4, 00:23:35.761 "process": { 00:23:35.761 "type": "rebuild", 00:23:35.761 "target": "spare", 00:23:35.761 "progress": { 00:23:35.761 "blocks": 65280, 00:23:35.761 "percent": 34 00:23:35.761 } 00:23:35.761 }, 00:23:35.761 "base_bdevs_list": [ 00:23:35.762 { 00:23:35.762 "name": "spare", 00:23:35.762 "uuid": "e3ff7895-e0b7-589a-89d2-2e972ebcc411", 00:23:35.762 "is_configured": true, 00:23:35.762 "data_offset": 2048, 00:23:35.762 "data_size": 63488 00:23:35.762 }, 00:23:35.762 { 00:23:35.762 "name": "BaseBdev2", 00:23:35.762 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:35.762 "is_configured": true, 00:23:35.762 "data_offset": 2048, 00:23:35.762 "data_size": 63488 00:23:35.762 }, 00:23:35.762 { 00:23:35.762 "name": "BaseBdev3", 00:23:35.762 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:35.762 "is_configured": true, 00:23:35.762 "data_offset": 2048, 00:23:35.762 "data_size": 63488 00:23:35.762 }, 00:23:35.762 { 00:23:35.762 "name": "BaseBdev4", 00:23:35.762 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:35.762 "is_configured": true, 00:23:35.762 "data_offset": 2048, 00:23:35.762 "data_size": 63488 00:23:35.762 } 00:23:35.762 ] 00:23:35.762 }' 00:23:35.762 09:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:36.020 09:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:23:36.020 09:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:36.020 09:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:36.020 09:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:36.957 09:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:36.957 09:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:36.957 09:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:36.957 09:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:36.957 09:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:36.957 09:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:36.957 09:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.957 09:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.957 09:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.957 09:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:36.957 09:23:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.957 09:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:36.957 "name": "raid_bdev1", 00:23:36.957 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:36.957 "strip_size_kb": 64, 00:23:36.957 "state": "online", 00:23:36.957 "raid_level": "raid5f", 00:23:36.957 "superblock": true, 00:23:36.957 "num_base_bdevs": 4, 00:23:36.957 "num_base_bdevs_discovered": 4, 
00:23:36.957 "num_base_bdevs_operational": 4, 00:23:36.957 "process": { 00:23:36.957 "type": "rebuild", 00:23:36.957 "target": "spare", 00:23:36.957 "progress": { 00:23:36.957 "blocks": 86400, 00:23:36.957 "percent": 45 00:23:36.957 } 00:23:36.957 }, 00:23:36.957 "base_bdevs_list": [ 00:23:36.957 { 00:23:36.957 "name": "spare", 00:23:36.957 "uuid": "e3ff7895-e0b7-589a-89d2-2e972ebcc411", 00:23:36.957 "is_configured": true, 00:23:36.957 "data_offset": 2048, 00:23:36.957 "data_size": 63488 00:23:36.957 }, 00:23:36.957 { 00:23:36.957 "name": "BaseBdev2", 00:23:36.957 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:36.957 "is_configured": true, 00:23:36.957 "data_offset": 2048, 00:23:36.957 "data_size": 63488 00:23:36.957 }, 00:23:36.957 { 00:23:36.957 "name": "BaseBdev3", 00:23:36.957 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:36.957 "is_configured": true, 00:23:36.957 "data_offset": 2048, 00:23:36.957 "data_size": 63488 00:23:36.957 }, 00:23:36.957 { 00:23:36.957 "name": "BaseBdev4", 00:23:36.957 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:36.957 "is_configured": true, 00:23:36.957 "data_offset": 2048, 00:23:36.957 "data_size": 63488 00:23:36.957 } 00:23:36.957 ] 00:23:36.957 }' 00:23:36.957 09:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:37.216 09:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:37.216 09:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:37.216 09:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:37.216 09:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:38.154 09:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:38.154 09:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:23:38.154 09:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:38.154 09:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:38.154 09:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:38.154 09:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:38.154 09:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.154 09:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.154 09:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.154 09:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:38.154 09:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.154 09:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:38.154 "name": "raid_bdev1", 00:23:38.154 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:38.154 "strip_size_kb": 64, 00:23:38.154 "state": "online", 00:23:38.154 "raid_level": "raid5f", 00:23:38.154 "superblock": true, 00:23:38.154 "num_base_bdevs": 4, 00:23:38.154 "num_base_bdevs_discovered": 4, 00:23:38.154 "num_base_bdevs_operational": 4, 00:23:38.154 "process": { 00:23:38.154 "type": "rebuild", 00:23:38.154 "target": "spare", 00:23:38.154 "progress": { 00:23:38.154 "blocks": 109440, 00:23:38.154 "percent": 57 00:23:38.154 } 00:23:38.154 }, 00:23:38.154 "base_bdevs_list": [ 00:23:38.154 { 00:23:38.154 "name": "spare", 00:23:38.154 "uuid": "e3ff7895-e0b7-589a-89d2-2e972ebcc411", 00:23:38.154 "is_configured": true, 00:23:38.154 "data_offset": 2048, 00:23:38.154 "data_size": 63488 00:23:38.154 }, 00:23:38.154 { 00:23:38.154 "name": "BaseBdev2", 
00:23:38.154 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:38.154 "is_configured": true, 00:23:38.154 "data_offset": 2048, 00:23:38.154 "data_size": 63488 00:23:38.154 }, 00:23:38.154 { 00:23:38.154 "name": "BaseBdev3", 00:23:38.154 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:38.154 "is_configured": true, 00:23:38.154 "data_offset": 2048, 00:23:38.154 "data_size": 63488 00:23:38.154 }, 00:23:38.154 { 00:23:38.154 "name": "BaseBdev4", 00:23:38.154 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:38.154 "is_configured": true, 00:23:38.154 "data_offset": 2048, 00:23:38.154 "data_size": 63488 00:23:38.154 } 00:23:38.154 ] 00:23:38.154 }' 00:23:38.154 09:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:38.154 09:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:38.154 09:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:38.413 09:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:38.413 09:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:39.351 09:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:39.351 09:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:39.351 09:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:39.351 09:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:39.351 09:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:39.351 09:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:39.351 09:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:23:39.351 09:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.351 09:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.351 09:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.351 09:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.351 09:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:39.351 "name": "raid_bdev1", 00:23:39.351 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:39.351 "strip_size_kb": 64, 00:23:39.351 "state": "online", 00:23:39.351 "raid_level": "raid5f", 00:23:39.351 "superblock": true, 00:23:39.351 "num_base_bdevs": 4, 00:23:39.351 "num_base_bdevs_discovered": 4, 00:23:39.351 "num_base_bdevs_operational": 4, 00:23:39.351 "process": { 00:23:39.351 "type": "rebuild", 00:23:39.351 "target": "spare", 00:23:39.351 "progress": { 00:23:39.351 "blocks": 130560, 00:23:39.351 "percent": 68 00:23:39.351 } 00:23:39.351 }, 00:23:39.351 "base_bdevs_list": [ 00:23:39.351 { 00:23:39.351 "name": "spare", 00:23:39.351 "uuid": "e3ff7895-e0b7-589a-89d2-2e972ebcc411", 00:23:39.351 "is_configured": true, 00:23:39.351 "data_offset": 2048, 00:23:39.351 "data_size": 63488 00:23:39.351 }, 00:23:39.351 { 00:23:39.351 "name": "BaseBdev2", 00:23:39.351 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:39.351 "is_configured": true, 00:23:39.351 "data_offset": 2048, 00:23:39.351 "data_size": 63488 00:23:39.351 }, 00:23:39.351 { 00:23:39.351 "name": "BaseBdev3", 00:23:39.351 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:39.351 "is_configured": true, 00:23:39.351 "data_offset": 2048, 00:23:39.351 "data_size": 63488 00:23:39.351 }, 00:23:39.351 { 00:23:39.351 "name": "BaseBdev4", 00:23:39.351 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:39.351 "is_configured": true, 
00:23:39.351 "data_offset": 2048, 00:23:39.351 "data_size": 63488 00:23:39.351 } 00:23:39.351 ] 00:23:39.351 }' 00:23:39.351 09:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:39.351 09:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:39.351 09:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:39.611 09:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:39.611 09:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:40.548 09:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:40.548 09:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:40.548 09:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:40.548 09:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:40.548 09:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:40.548 09:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:40.548 09:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.548 09:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.548 09:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:40.548 09:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.548 09:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.548 09:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:23:40.548 "name": "raid_bdev1", 00:23:40.548 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:40.548 "strip_size_kb": 64, 00:23:40.548 "state": "online", 00:23:40.548 "raid_level": "raid5f", 00:23:40.548 "superblock": true, 00:23:40.548 "num_base_bdevs": 4, 00:23:40.548 "num_base_bdevs_discovered": 4, 00:23:40.548 "num_base_bdevs_operational": 4, 00:23:40.548 "process": { 00:23:40.548 "type": "rebuild", 00:23:40.548 "target": "spare", 00:23:40.548 "progress": { 00:23:40.548 "blocks": 153600, 00:23:40.548 "percent": 80 00:23:40.548 } 00:23:40.548 }, 00:23:40.548 "base_bdevs_list": [ 00:23:40.548 { 00:23:40.548 "name": "spare", 00:23:40.548 "uuid": "e3ff7895-e0b7-589a-89d2-2e972ebcc411", 00:23:40.548 "is_configured": true, 00:23:40.548 "data_offset": 2048, 00:23:40.548 "data_size": 63488 00:23:40.548 }, 00:23:40.548 { 00:23:40.548 "name": "BaseBdev2", 00:23:40.548 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:40.548 "is_configured": true, 00:23:40.548 "data_offset": 2048, 00:23:40.548 "data_size": 63488 00:23:40.548 }, 00:23:40.548 { 00:23:40.548 "name": "BaseBdev3", 00:23:40.548 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:40.548 "is_configured": true, 00:23:40.548 "data_offset": 2048, 00:23:40.548 "data_size": 63488 00:23:40.548 }, 00:23:40.548 { 00:23:40.548 "name": "BaseBdev4", 00:23:40.548 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:40.548 "is_configured": true, 00:23:40.548 "data_offset": 2048, 00:23:40.548 "data_size": 63488 00:23:40.548 } 00:23:40.548 ] 00:23:40.548 }' 00:23:40.548 09:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:40.548 09:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:40.548 09:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:40.548 09:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare 
== \s\p\a\r\e ]] 00:23:40.548 09:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:41.925 09:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:41.925 09:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:41.925 09:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:41.925 09:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:41.925 09:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:41.925 09:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:41.925 09:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.925 09:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.925 09:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.925 09:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.925 09:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.925 09:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:41.925 "name": "raid_bdev1", 00:23:41.925 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:41.925 "strip_size_kb": 64, 00:23:41.925 "state": "online", 00:23:41.925 "raid_level": "raid5f", 00:23:41.925 "superblock": true, 00:23:41.925 "num_base_bdevs": 4, 00:23:41.925 "num_base_bdevs_discovered": 4, 00:23:41.925 "num_base_bdevs_operational": 4, 00:23:41.925 "process": { 00:23:41.925 "type": "rebuild", 00:23:41.925 "target": "spare", 00:23:41.925 "progress": { 00:23:41.925 "blocks": 174720, 00:23:41.925 "percent": 91 00:23:41.925 
} 00:23:41.925 }, 00:23:41.925 "base_bdevs_list": [ 00:23:41.925 { 00:23:41.925 "name": "spare", 00:23:41.925 "uuid": "e3ff7895-e0b7-589a-89d2-2e972ebcc411", 00:23:41.925 "is_configured": true, 00:23:41.925 "data_offset": 2048, 00:23:41.925 "data_size": 63488 00:23:41.925 }, 00:23:41.925 { 00:23:41.925 "name": "BaseBdev2", 00:23:41.925 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:41.925 "is_configured": true, 00:23:41.925 "data_offset": 2048, 00:23:41.925 "data_size": 63488 00:23:41.925 }, 00:23:41.925 { 00:23:41.925 "name": "BaseBdev3", 00:23:41.925 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:41.925 "is_configured": true, 00:23:41.925 "data_offset": 2048, 00:23:41.925 "data_size": 63488 00:23:41.925 }, 00:23:41.925 { 00:23:41.925 "name": "BaseBdev4", 00:23:41.925 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:41.925 "is_configured": true, 00:23:41.925 "data_offset": 2048, 00:23:41.925 "data_size": 63488 00:23:41.925 } 00:23:41.925 ] 00:23:41.925 }' 00:23:41.925 09:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:41.925 09:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:41.925 09:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:41.925 09:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:41.925 09:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:42.492 [2024-10-15 09:23:26.276590] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:42.492 [2024-10-15 09:23:26.276737] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:42.492 [2024-10-15 09:23:26.276991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:42.752 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:42.752 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:42.752 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:42.752 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:42.752 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:42.752 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:42.752 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.752 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.752 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.752 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.752 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.752 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:42.752 "name": "raid_bdev1", 00:23:42.752 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:42.752 "strip_size_kb": 64, 00:23:42.752 "state": "online", 00:23:42.752 "raid_level": "raid5f", 00:23:42.752 "superblock": true, 00:23:42.752 "num_base_bdevs": 4, 00:23:42.752 "num_base_bdevs_discovered": 4, 00:23:42.752 "num_base_bdevs_operational": 4, 00:23:42.752 "base_bdevs_list": [ 00:23:42.752 { 00:23:42.752 "name": "spare", 00:23:42.752 "uuid": "e3ff7895-e0b7-589a-89d2-2e972ebcc411", 00:23:42.752 "is_configured": true, 00:23:42.752 "data_offset": 2048, 00:23:42.752 "data_size": 63488 00:23:42.752 }, 00:23:42.752 { 00:23:42.752 "name": "BaseBdev2", 00:23:42.752 "uuid": 
"e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:42.752 "is_configured": true, 00:23:42.752 "data_offset": 2048, 00:23:42.752 "data_size": 63488 00:23:42.752 }, 00:23:42.752 { 00:23:42.752 "name": "BaseBdev3", 00:23:42.752 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:42.752 "is_configured": true, 00:23:42.752 "data_offset": 2048, 00:23:42.752 "data_size": 63488 00:23:42.752 }, 00:23:42.752 { 00:23:42.752 "name": "BaseBdev4", 00:23:42.752 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:42.752 "is_configured": true, 00:23:42.752 "data_offset": 2048, 00:23:42.752 "data_size": 63488 00:23:42.752 } 00:23:42.752 ] 00:23:42.752 }' 00:23:42.752 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:43.011 "name": "raid_bdev1", 00:23:43.011 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:43.011 "strip_size_kb": 64, 00:23:43.011 "state": "online", 00:23:43.011 "raid_level": "raid5f", 00:23:43.011 "superblock": true, 00:23:43.011 "num_base_bdevs": 4, 00:23:43.011 "num_base_bdevs_discovered": 4, 00:23:43.011 "num_base_bdevs_operational": 4, 00:23:43.011 "base_bdevs_list": [ 00:23:43.011 { 00:23:43.011 "name": "spare", 00:23:43.011 "uuid": "e3ff7895-e0b7-589a-89d2-2e972ebcc411", 00:23:43.011 "is_configured": true, 00:23:43.011 "data_offset": 2048, 00:23:43.011 "data_size": 63488 00:23:43.011 }, 00:23:43.011 { 00:23:43.011 "name": "BaseBdev2", 00:23:43.011 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:43.011 "is_configured": true, 00:23:43.011 "data_offset": 2048, 00:23:43.011 "data_size": 63488 00:23:43.011 }, 00:23:43.011 { 00:23:43.011 "name": "BaseBdev3", 00:23:43.011 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:43.011 "is_configured": true, 00:23:43.011 "data_offset": 2048, 00:23:43.011 "data_size": 63488 00:23:43.011 }, 00:23:43.011 { 00:23:43.011 "name": "BaseBdev4", 00:23:43.011 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:43.011 "is_configured": true, 00:23:43.011 "data_offset": 2048, 00:23:43.011 "data_size": 63488 00:23:43.011 } 00:23:43.011 ] 00:23:43.011 }' 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:43.011 
09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.011 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.324 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.324 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:23:43.324 "name": "raid_bdev1", 00:23:43.324 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:43.324 "strip_size_kb": 64, 00:23:43.324 "state": "online", 00:23:43.324 "raid_level": "raid5f", 00:23:43.324 "superblock": true, 00:23:43.324 "num_base_bdevs": 4, 00:23:43.324 "num_base_bdevs_discovered": 4, 00:23:43.324 "num_base_bdevs_operational": 4, 00:23:43.324 "base_bdevs_list": [ 00:23:43.324 { 00:23:43.324 "name": "spare", 00:23:43.324 "uuid": "e3ff7895-e0b7-589a-89d2-2e972ebcc411", 00:23:43.324 "is_configured": true, 00:23:43.324 "data_offset": 2048, 00:23:43.324 "data_size": 63488 00:23:43.324 }, 00:23:43.324 { 00:23:43.324 "name": "BaseBdev2", 00:23:43.324 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:43.324 "is_configured": true, 00:23:43.324 "data_offset": 2048, 00:23:43.324 "data_size": 63488 00:23:43.324 }, 00:23:43.324 { 00:23:43.324 "name": "BaseBdev3", 00:23:43.324 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:43.324 "is_configured": true, 00:23:43.324 "data_offset": 2048, 00:23:43.324 "data_size": 63488 00:23:43.324 }, 00:23:43.324 { 00:23:43.324 "name": "BaseBdev4", 00:23:43.324 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:43.324 "is_configured": true, 00:23:43.324 "data_offset": 2048, 00:23:43.324 "data_size": 63488 00:23:43.324 } 00:23:43.324 ] 00:23:43.324 }' 00:23:43.324 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:43.324 09:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.583 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:43.583 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.583 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.583 [2024-10-15 09:23:27.454844] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:23:43.583 [2024-10-15 09:23:27.454940] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:43.583 [2024-10-15 09:23:27.455058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:43.583 [2024-10-15 09:23:27.455251] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:43.583 [2024-10-15 09:23:27.455283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:43.583 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.583 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.583 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:23:43.583 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.583 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.583 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.842 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:43.842 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:43.842 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:43.842 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:43.842 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:43.842 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:43.842 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:23:43.842 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:43.842 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:43.842 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:43.842 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:43.842 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:43.842 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:44.101 /dev/nbd0 00:23:44.101 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:44.101 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:44.101 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:44.101 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:23:44.101 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:44.101 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:44.101 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:44.101 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:23:44.101 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:44.101 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:44.101 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:44.101 1+0 records in 
00:23:44.101 1+0 records out 00:23:44.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351772 s, 11.6 MB/s 00:23:44.101 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:44.101 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:23:44.101 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:44.101 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:44.101 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:23:44.101 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:44.101 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:44.101 09:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:44.359 /dev/nbd1 00:23:44.359 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:44.359 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:44.359 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:23:44.359 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:23:44.359 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:44.359 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:44.359 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:23:44.359 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:23:44.359 09:23:28 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:44.359 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:44.359 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:44.359 1+0 records in 00:23:44.359 1+0 records out 00:23:44.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482554 s, 8.5 MB/s 00:23:44.359 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:44.359 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:23:44.359 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:44.359 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:44.359 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:23:44.359 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:44.359 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:44.359 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:44.618 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:44.618 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:44.618 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:44.618 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:44.618 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 
-- # local i 00:23:44.618 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:44.618 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:44.877 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:44.877 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:44.877 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:44.877 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:44.877 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:44.877 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:44.877 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:44.877 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:44.877 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:44.877 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:45.136 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:45.136 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:45.136 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:45.136 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:45.136 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:45.136 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd1 /proc/partitions 00:23:45.136 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:45.136 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:45.136 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:45.136 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:45.136 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.136 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.136 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.136 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:45.136 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.136 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.136 [2024-10-15 09:23:28.986956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:45.136 [2024-10-15 09:23:28.987073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:45.136 [2024-10-15 09:23:28.987115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:23:45.136 [2024-10-15 09:23:28.987166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:45.136 [2024-10-15 09:23:28.990597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:45.136 [2024-10-15 09:23:28.990646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:45.136 [2024-10-15 09:23:28.990790] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:45.136 [2024-10-15 09:23:28.990892] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:45.136 [2024-10-15 09:23:28.991099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:45.136 [2024-10-15 09:23:28.991316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:45.136 [2024-10-15 09:23:28.991445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:45.136 spare 00:23:45.136 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.136 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:45.136 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.136 09:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.396 [2024-10-15 09:23:29.091594] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:45.396 [2024-10-15 09:23:29.091722] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:45.396 [2024-10-15 09:23:29.092269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:23:45.396 [2024-10-15 09:23:29.098885] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:45.396 [2024-10-15 09:23:29.098915] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:45.396 [2024-10-15 09:23:29.099277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:45.396 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.396 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:45.396 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:45.396 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:45.396 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:45.396 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:45.396 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:45.396 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:45.396 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:45.396 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:45.396 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:45.396 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.396 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.396 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.396 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.396 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.396 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:45.396 "name": "raid_bdev1", 00:23:45.396 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:45.396 "strip_size_kb": 64, 00:23:45.396 "state": "online", 00:23:45.396 "raid_level": "raid5f", 00:23:45.396 "superblock": true, 00:23:45.396 "num_base_bdevs": 4, 00:23:45.396 "num_base_bdevs_discovered": 4, 00:23:45.396 "num_base_bdevs_operational": 4, 00:23:45.396 "base_bdevs_list": [ 00:23:45.396 { 
00:23:45.396 "name": "spare", 00:23:45.396 "uuid": "e3ff7895-e0b7-589a-89d2-2e972ebcc411", 00:23:45.396 "is_configured": true, 00:23:45.396 "data_offset": 2048, 00:23:45.396 "data_size": 63488 00:23:45.396 }, 00:23:45.396 { 00:23:45.396 "name": "BaseBdev2", 00:23:45.396 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:45.396 "is_configured": true, 00:23:45.396 "data_offset": 2048, 00:23:45.396 "data_size": 63488 00:23:45.396 }, 00:23:45.396 { 00:23:45.396 "name": "BaseBdev3", 00:23:45.396 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:45.396 "is_configured": true, 00:23:45.396 "data_offset": 2048, 00:23:45.396 "data_size": 63488 00:23:45.396 }, 00:23:45.396 { 00:23:45.396 "name": "BaseBdev4", 00:23:45.396 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:45.396 "is_configured": true, 00:23:45.396 "data_offset": 2048, 00:23:45.396 "data_size": 63488 00:23:45.396 } 00:23:45.396 ] 00:23:45.396 }' 00:23:45.396 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:45.396 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.965 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:45.965 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:45.965 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:45.965 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:45.965 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:45.965 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.965 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.965 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.965 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.965 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.965 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:45.965 "name": "raid_bdev1", 00:23:45.965 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:45.965 "strip_size_kb": 64, 00:23:45.965 "state": "online", 00:23:45.965 "raid_level": "raid5f", 00:23:45.965 "superblock": true, 00:23:45.965 "num_base_bdevs": 4, 00:23:45.966 "num_base_bdevs_discovered": 4, 00:23:45.966 "num_base_bdevs_operational": 4, 00:23:45.966 "base_bdevs_list": [ 00:23:45.966 { 00:23:45.966 "name": "spare", 00:23:45.966 "uuid": "e3ff7895-e0b7-589a-89d2-2e972ebcc411", 00:23:45.966 "is_configured": true, 00:23:45.966 "data_offset": 2048, 00:23:45.966 "data_size": 63488 00:23:45.966 }, 00:23:45.966 { 00:23:45.966 "name": "BaseBdev2", 00:23:45.966 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:45.966 "is_configured": true, 00:23:45.966 "data_offset": 2048, 00:23:45.966 "data_size": 63488 00:23:45.966 }, 00:23:45.966 { 00:23:45.966 "name": "BaseBdev3", 00:23:45.966 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:45.966 "is_configured": true, 00:23:45.966 "data_offset": 2048, 00:23:45.966 "data_size": 63488 00:23:45.966 }, 00:23:45.966 { 00:23:45.966 "name": "BaseBdev4", 00:23:45.966 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:45.966 "is_configured": true, 00:23:45.966 "data_offset": 2048, 00:23:45.966 "data_size": 63488 00:23:45.966 } 00:23:45.966 ] 00:23:45.966 }' 00:23:45.966 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:45.966 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:45.966 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:23:45.966 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:45.966 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.966 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:45.966 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.966 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.966 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.966 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:45.966 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:45.966 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.966 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.966 [2024-10-15 09:23:29.887537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:46.247 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.247 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:46.247 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:46.247 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:46.247 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:46.247 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:46.247 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:46.247 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:46.247 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:46.247 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:46.247 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:46.247 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.247 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.247 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.247 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.247 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.247 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:46.247 "name": "raid_bdev1", 00:23:46.247 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:46.247 "strip_size_kb": 64, 00:23:46.247 "state": "online", 00:23:46.247 "raid_level": "raid5f", 00:23:46.247 "superblock": true, 00:23:46.247 "num_base_bdevs": 4, 00:23:46.247 "num_base_bdevs_discovered": 3, 00:23:46.247 "num_base_bdevs_operational": 3, 00:23:46.247 "base_bdevs_list": [ 00:23:46.247 { 00:23:46.247 "name": null, 00:23:46.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.247 "is_configured": false, 00:23:46.247 "data_offset": 0, 00:23:46.247 "data_size": 63488 00:23:46.247 }, 00:23:46.247 { 00:23:46.248 "name": "BaseBdev2", 00:23:46.248 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:46.248 "is_configured": true, 00:23:46.248 "data_offset": 2048, 00:23:46.248 "data_size": 63488 00:23:46.248 }, 00:23:46.248 
{ 00:23:46.248 "name": "BaseBdev3", 00:23:46.248 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:46.248 "is_configured": true, 00:23:46.248 "data_offset": 2048, 00:23:46.248 "data_size": 63488 00:23:46.248 }, 00:23:46.248 { 00:23:46.248 "name": "BaseBdev4", 00:23:46.248 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:46.248 "is_configured": true, 00:23:46.248 "data_offset": 2048, 00:23:46.248 "data_size": 63488 00:23:46.248 } 00:23:46.248 ] 00:23:46.248 }' 00:23:46.248 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:46.248 09:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.512 09:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:46.512 09:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.512 09:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.512 [2024-10-15 09:23:30.427729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:46.512 [2024-10-15 09:23:30.428058] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:46.512 [2024-10-15 09:23:30.428099] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:46.512 [2024-10-15 09:23:30.428168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:46.771 [2024-10-15 09:23:30.442581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:23:46.771 09:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.771 09:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:46.771 [2024-10-15 09:23:30.451797] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:47.708 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:47.709 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:47.709 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:47.709 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:47.709 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:47.709 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.709 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.709 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.709 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.709 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.709 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:47.709 "name": "raid_bdev1", 00:23:47.709 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:47.709 "strip_size_kb": 64, 00:23:47.709 "state": "online", 00:23:47.709 
"raid_level": "raid5f", 00:23:47.709 "superblock": true, 00:23:47.709 "num_base_bdevs": 4, 00:23:47.709 "num_base_bdevs_discovered": 4, 00:23:47.709 "num_base_bdevs_operational": 4, 00:23:47.709 "process": { 00:23:47.709 "type": "rebuild", 00:23:47.709 "target": "spare", 00:23:47.709 "progress": { 00:23:47.709 "blocks": 19200, 00:23:47.709 "percent": 10 00:23:47.709 } 00:23:47.709 }, 00:23:47.709 "base_bdevs_list": [ 00:23:47.709 { 00:23:47.709 "name": "spare", 00:23:47.709 "uuid": "e3ff7895-e0b7-589a-89d2-2e972ebcc411", 00:23:47.709 "is_configured": true, 00:23:47.709 "data_offset": 2048, 00:23:47.709 "data_size": 63488 00:23:47.709 }, 00:23:47.709 { 00:23:47.709 "name": "BaseBdev2", 00:23:47.709 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:47.709 "is_configured": true, 00:23:47.709 "data_offset": 2048, 00:23:47.709 "data_size": 63488 00:23:47.709 }, 00:23:47.709 { 00:23:47.709 "name": "BaseBdev3", 00:23:47.709 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:47.709 "is_configured": true, 00:23:47.709 "data_offset": 2048, 00:23:47.709 "data_size": 63488 00:23:47.709 }, 00:23:47.709 { 00:23:47.709 "name": "BaseBdev4", 00:23:47.709 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:47.709 "is_configured": true, 00:23:47.709 "data_offset": 2048, 00:23:47.709 "data_size": 63488 00:23:47.709 } 00:23:47.709 ] 00:23:47.709 }' 00:23:47.709 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:47.709 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:47.709 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:47.709 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:47.709 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:47.709 09:23:31 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.709 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.709 [2024-10-15 09:23:31.622326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:47.969 [2024-10-15 09:23:31.668250] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:47.969 [2024-10-15 09:23:31.668398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:47.969 [2024-10-15 09:23:31.668427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:47.969 [2024-10-15 09:23:31.668442] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:47.969 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.969 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:47.969 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:47.969 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:47.969 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:47.969 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:47.969 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:47.969 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:47.969 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:47.969 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:47.969 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:23:47.969 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.969 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.969 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.969 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.969 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.969 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:47.969 "name": "raid_bdev1", 00:23:47.969 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:47.969 "strip_size_kb": 64, 00:23:47.969 "state": "online", 00:23:47.969 "raid_level": "raid5f", 00:23:47.969 "superblock": true, 00:23:47.969 "num_base_bdevs": 4, 00:23:47.969 "num_base_bdevs_discovered": 3, 00:23:47.969 "num_base_bdevs_operational": 3, 00:23:47.970 "base_bdevs_list": [ 00:23:47.970 { 00:23:47.970 "name": null, 00:23:47.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.970 "is_configured": false, 00:23:47.970 "data_offset": 0, 00:23:47.970 "data_size": 63488 00:23:47.970 }, 00:23:47.970 { 00:23:47.970 "name": "BaseBdev2", 00:23:47.970 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:47.970 "is_configured": true, 00:23:47.970 "data_offset": 2048, 00:23:47.970 "data_size": 63488 00:23:47.970 }, 00:23:47.970 { 00:23:47.970 "name": "BaseBdev3", 00:23:47.970 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:47.970 "is_configured": true, 00:23:47.970 "data_offset": 2048, 00:23:47.970 "data_size": 63488 00:23:47.970 }, 00:23:47.970 { 00:23:47.970 "name": "BaseBdev4", 00:23:47.970 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:47.970 "is_configured": true, 00:23:47.970 "data_offset": 2048, 00:23:47.970 "data_size": 63488 00:23:47.970 } 00:23:47.970 ] 00:23:47.970 
}' 00:23:47.970 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:47.970 09:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.537 09:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:48.537 09:23:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.537 09:23:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.537 [2024-10-15 09:23:32.239189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:48.537 [2024-10-15 09:23:32.239307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:48.537 [2024-10-15 09:23:32.239355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:23:48.537 [2024-10-15 09:23:32.239377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:48.537 [2024-10-15 09:23:32.240068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:48.537 [2024-10-15 09:23:32.240140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:48.537 [2024-10-15 09:23:32.240307] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:48.537 [2024-10-15 09:23:32.240342] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:48.537 [2024-10-15 09:23:32.240359] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:48.537 [2024-10-15 09:23:32.240400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:48.538 [2024-10-15 09:23:32.255068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:23:48.538 spare 00:23:48.538 09:23:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.538 09:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:48.538 [2024-10-15 09:23:32.264112] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:49.478 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:49.478 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:49.478 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:49.478 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:49.478 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:49.478 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.478 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.478 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.478 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.478 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.478 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:49.478 "name": "raid_bdev1", 00:23:49.478 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:49.479 "strip_size_kb": 64, 00:23:49.479 "state": 
"online", 00:23:49.479 "raid_level": "raid5f", 00:23:49.479 "superblock": true, 00:23:49.479 "num_base_bdevs": 4, 00:23:49.479 "num_base_bdevs_discovered": 4, 00:23:49.479 "num_base_bdevs_operational": 4, 00:23:49.479 "process": { 00:23:49.479 "type": "rebuild", 00:23:49.479 "target": "spare", 00:23:49.479 "progress": { 00:23:49.479 "blocks": 17280, 00:23:49.479 "percent": 9 00:23:49.479 } 00:23:49.479 }, 00:23:49.479 "base_bdevs_list": [ 00:23:49.479 { 00:23:49.479 "name": "spare", 00:23:49.479 "uuid": "e3ff7895-e0b7-589a-89d2-2e972ebcc411", 00:23:49.479 "is_configured": true, 00:23:49.479 "data_offset": 2048, 00:23:49.479 "data_size": 63488 00:23:49.479 }, 00:23:49.479 { 00:23:49.479 "name": "BaseBdev2", 00:23:49.479 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:49.479 "is_configured": true, 00:23:49.479 "data_offset": 2048, 00:23:49.479 "data_size": 63488 00:23:49.479 }, 00:23:49.479 { 00:23:49.479 "name": "BaseBdev3", 00:23:49.479 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:49.479 "is_configured": true, 00:23:49.479 "data_offset": 2048, 00:23:49.479 "data_size": 63488 00:23:49.479 }, 00:23:49.479 { 00:23:49.479 "name": "BaseBdev4", 00:23:49.479 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:49.479 "is_configured": true, 00:23:49.479 "data_offset": 2048, 00:23:49.479 "data_size": 63488 00:23:49.479 } 00:23:49.479 ] 00:23:49.479 }' 00:23:49.479 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:49.479 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:49.479 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:49.738 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:49.738 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:49.738 09:23:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.738 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.738 [2024-10-15 09:23:33.434530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:49.738 [2024-10-15 09:23:33.480127] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:49.738 [2024-10-15 09:23:33.480229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:49.738 [2024-10-15 09:23:33.480278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:49.738 [2024-10-15 09:23:33.480291] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:49.738 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.738 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:49.738 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:49.738 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:49.738 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:49.738 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:49.738 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:49.739 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:49.739 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:49.739 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:49.739 09:23:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:49.739 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.739 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.739 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.739 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.739 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.739 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:49.739 "name": "raid_bdev1", 00:23:49.739 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:49.739 "strip_size_kb": 64, 00:23:49.739 "state": "online", 00:23:49.739 "raid_level": "raid5f", 00:23:49.739 "superblock": true, 00:23:49.739 "num_base_bdevs": 4, 00:23:49.739 "num_base_bdevs_discovered": 3, 00:23:49.739 "num_base_bdevs_operational": 3, 00:23:49.739 "base_bdevs_list": [ 00:23:49.739 { 00:23:49.739 "name": null, 00:23:49.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.739 "is_configured": false, 00:23:49.739 "data_offset": 0, 00:23:49.739 "data_size": 63488 00:23:49.739 }, 00:23:49.739 { 00:23:49.739 "name": "BaseBdev2", 00:23:49.739 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:49.739 "is_configured": true, 00:23:49.739 "data_offset": 2048, 00:23:49.739 "data_size": 63488 00:23:49.739 }, 00:23:49.739 { 00:23:49.739 "name": "BaseBdev3", 00:23:49.739 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:49.739 "is_configured": true, 00:23:49.739 "data_offset": 2048, 00:23:49.739 "data_size": 63488 00:23:49.739 }, 00:23:49.739 { 00:23:49.739 "name": "BaseBdev4", 00:23:49.739 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:49.739 "is_configured": true, 00:23:49.739 "data_offset": 2048, 00:23:49.739 
"data_size": 63488 00:23:49.739 } 00:23:49.739 ] 00:23:49.739 }' 00:23:49.739 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:49.739 09:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:50.309 "name": "raid_bdev1", 00:23:50.309 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:50.309 "strip_size_kb": 64, 00:23:50.309 "state": "online", 00:23:50.309 "raid_level": "raid5f", 00:23:50.309 "superblock": true, 00:23:50.309 "num_base_bdevs": 4, 00:23:50.309 "num_base_bdevs_discovered": 3, 00:23:50.309 "num_base_bdevs_operational": 3, 00:23:50.309 "base_bdevs_list": [ 00:23:50.309 { 00:23:50.309 "name": null, 00:23:50.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.309 
"is_configured": false, 00:23:50.309 "data_offset": 0, 00:23:50.309 "data_size": 63488 00:23:50.309 }, 00:23:50.309 { 00:23:50.309 "name": "BaseBdev2", 00:23:50.309 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:50.309 "is_configured": true, 00:23:50.309 "data_offset": 2048, 00:23:50.309 "data_size": 63488 00:23:50.309 }, 00:23:50.309 { 00:23:50.309 "name": "BaseBdev3", 00:23:50.309 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:50.309 "is_configured": true, 00:23:50.309 "data_offset": 2048, 00:23:50.309 "data_size": 63488 00:23:50.309 }, 00:23:50.309 { 00:23:50.309 "name": "BaseBdev4", 00:23:50.309 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:50.309 "is_configured": true, 00:23:50.309 "data_offset": 2048, 00:23:50.309 "data_size": 63488 00:23:50.309 } 00:23:50.309 ] 00:23:50.309 }' 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.309 09:23:34 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.309 [2024-10-15 09:23:34.206739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:50.309 [2024-10-15 09:23:34.206841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.309 [2024-10-15 09:23:34.206878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:23:50.309 [2024-10-15 09:23:34.206894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.309 [2024-10-15 09:23:34.207594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.309 [2024-10-15 09:23:34.207629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:50.309 [2024-10-15 09:23:34.207763] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:50.309 [2024-10-15 09:23:34.207794] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:50.309 [2024-10-15 09:23:34.207811] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:50.309 [2024-10-15 09:23:34.207826] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:50.309 BaseBdev1 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.309 09:23:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:51.688 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:51.688 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:51.688 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:23:51.688 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:51.688 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:51.688 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:51.688 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:51.688 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:51.688 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:51.688 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:51.688 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.688 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.688 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.688 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:51.688 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.688 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:51.688 "name": "raid_bdev1", 00:23:51.688 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:51.688 "strip_size_kb": 64, 00:23:51.688 "state": "online", 00:23:51.688 "raid_level": "raid5f", 00:23:51.688 "superblock": true, 00:23:51.688 "num_base_bdevs": 4, 00:23:51.688 "num_base_bdevs_discovered": 3, 00:23:51.688 "num_base_bdevs_operational": 3, 00:23:51.688 "base_bdevs_list": [ 00:23:51.688 { 00:23:51.688 "name": null, 00:23:51.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.688 "is_configured": false, 00:23:51.688 
"data_offset": 0, 00:23:51.688 "data_size": 63488 00:23:51.688 }, 00:23:51.688 { 00:23:51.688 "name": "BaseBdev2", 00:23:51.688 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:51.688 "is_configured": true, 00:23:51.688 "data_offset": 2048, 00:23:51.688 "data_size": 63488 00:23:51.688 }, 00:23:51.688 { 00:23:51.688 "name": "BaseBdev3", 00:23:51.688 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:51.688 "is_configured": true, 00:23:51.688 "data_offset": 2048, 00:23:51.688 "data_size": 63488 00:23:51.688 }, 00:23:51.688 { 00:23:51.688 "name": "BaseBdev4", 00:23:51.688 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:51.688 "is_configured": true, 00:23:51.688 "data_offset": 2048, 00:23:51.688 "data_size": 63488 00:23:51.688 } 00:23:51.688 ] 00:23:51.688 }' 00:23:51.688 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:51.688 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:51.948 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:51.948 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:51.948 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:51.948 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:51.948 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:51.948 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.948 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.948 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.948 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:23:51.948 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.948 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:51.948 "name": "raid_bdev1", 00:23:51.948 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:51.948 "strip_size_kb": 64, 00:23:51.948 "state": "online", 00:23:51.948 "raid_level": "raid5f", 00:23:51.948 "superblock": true, 00:23:51.948 "num_base_bdevs": 4, 00:23:51.948 "num_base_bdevs_discovered": 3, 00:23:51.948 "num_base_bdevs_operational": 3, 00:23:51.948 "base_bdevs_list": [ 00:23:51.948 { 00:23:51.948 "name": null, 00:23:51.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.948 "is_configured": false, 00:23:51.948 "data_offset": 0, 00:23:51.948 "data_size": 63488 00:23:51.948 }, 00:23:51.948 { 00:23:51.948 "name": "BaseBdev2", 00:23:51.948 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:51.948 "is_configured": true, 00:23:51.948 "data_offset": 2048, 00:23:51.948 "data_size": 63488 00:23:51.948 }, 00:23:51.948 { 00:23:51.948 "name": "BaseBdev3", 00:23:51.948 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:51.948 "is_configured": true, 00:23:51.948 "data_offset": 2048, 00:23:51.948 "data_size": 63488 00:23:51.948 }, 00:23:51.948 { 00:23:51.948 "name": "BaseBdev4", 00:23:51.948 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:51.948 "is_configured": true, 00:23:51.948 "data_offset": 2048, 00:23:51.948 "data_size": 63488 00:23:51.948 } 00:23:51.948 ] 00:23:51.948 }' 00:23:51.948 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:51.948 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:51.948 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:52.207 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:52.207 
09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:52.207 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:23:52.207 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:52.207 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:52.207 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.207 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:52.207 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.207 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:52.207 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.207 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:52.207 [2024-10-15 09:23:35.919399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:52.207 [2024-10-15 09:23:35.919681] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:52.207 [2024-10-15 09:23:35.919712] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:52.207 request: 00:23:52.207 { 00:23:52.207 "base_bdev": "BaseBdev1", 00:23:52.207 "raid_bdev": "raid_bdev1", 00:23:52.207 "method": "bdev_raid_add_base_bdev", 00:23:52.207 "req_id": 1 00:23:52.208 } 00:23:52.208 Got JSON-RPC error response 00:23:52.208 response: 00:23:52.208 { 00:23:52.208 "code": -22, 00:23:52.208 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:23:52.208 } 00:23:52.208 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:52.208 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:23:52.208 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:52.208 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:52.208 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:52.208 09:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:53.144 09:23:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:53.144 09:23:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:53.144 09:23:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:53.144 09:23:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:53.144 09:23:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:53.144 09:23:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:53.144 09:23:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:53.144 09:23:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:53.144 09:23:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:53.144 09:23:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:53.144 09:23:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.144 09:23:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.144 09:23:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:53.144 09:23:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.144 09:23:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.144 09:23:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:53.144 "name": "raid_bdev1", 00:23:53.144 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:53.144 "strip_size_kb": 64, 00:23:53.144 "state": "online", 00:23:53.144 "raid_level": "raid5f", 00:23:53.144 "superblock": true, 00:23:53.144 "num_base_bdevs": 4, 00:23:53.144 "num_base_bdevs_discovered": 3, 00:23:53.144 "num_base_bdevs_operational": 3, 00:23:53.144 "base_bdevs_list": [ 00:23:53.144 { 00:23:53.144 "name": null, 00:23:53.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.144 "is_configured": false, 00:23:53.144 "data_offset": 0, 00:23:53.144 "data_size": 63488 00:23:53.144 }, 00:23:53.144 { 00:23:53.144 "name": "BaseBdev2", 00:23:53.144 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:53.144 "is_configured": true, 00:23:53.144 "data_offset": 2048, 00:23:53.144 "data_size": 63488 00:23:53.144 }, 00:23:53.144 { 00:23:53.144 "name": "BaseBdev3", 00:23:53.144 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:53.144 "is_configured": true, 00:23:53.144 "data_offset": 2048, 00:23:53.144 "data_size": 63488 00:23:53.144 }, 00:23:53.144 { 00:23:53.144 "name": "BaseBdev4", 00:23:53.144 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:53.144 "is_configured": true, 00:23:53.144 "data_offset": 2048, 00:23:53.144 "data_size": 63488 00:23:53.144 } 00:23:53.144 ] 00:23:53.144 }' 00:23:53.144 09:23:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:53.144 09:23:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:23:53.711 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:53.711 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:53.711 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:53.712 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:53.712 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:53.712 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.712 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.712 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.712 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:53.712 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.712 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:53.712 "name": "raid_bdev1", 00:23:53.712 "uuid": "3af8ec86-64ce-4172-8f16-de823abac1e8", 00:23:53.712 "strip_size_kb": 64, 00:23:53.712 "state": "online", 00:23:53.712 "raid_level": "raid5f", 00:23:53.712 "superblock": true, 00:23:53.712 "num_base_bdevs": 4, 00:23:53.712 "num_base_bdevs_discovered": 3, 00:23:53.712 "num_base_bdevs_operational": 3, 00:23:53.712 "base_bdevs_list": [ 00:23:53.712 { 00:23:53.712 "name": null, 00:23:53.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.712 "is_configured": false, 00:23:53.712 "data_offset": 0, 00:23:53.712 "data_size": 63488 00:23:53.712 }, 00:23:53.712 { 00:23:53.712 "name": "BaseBdev2", 00:23:53.712 "uuid": "e57e7329-c23a-5628-aa92-d84e80dba3e7", 00:23:53.712 "is_configured": true, 
00:23:53.712 "data_offset": 2048, 00:23:53.712 "data_size": 63488 00:23:53.712 }, 00:23:53.712 { 00:23:53.712 "name": "BaseBdev3", 00:23:53.712 "uuid": "c1322706-7b86-59b2-8a5e-63ac031972ae", 00:23:53.712 "is_configured": true, 00:23:53.712 "data_offset": 2048, 00:23:53.712 "data_size": 63488 00:23:53.712 }, 00:23:53.712 { 00:23:53.712 "name": "BaseBdev4", 00:23:53.712 "uuid": "81aaaf3d-1182-5480-84d0-7b1aab2e032c", 00:23:53.712 "is_configured": true, 00:23:53.712 "data_offset": 2048, 00:23:53.712 "data_size": 63488 00:23:53.712 } 00:23:53.712 ] 00:23:53.712 }' 00:23:53.712 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:53.712 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:53.712 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:53.712 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:53.712 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85829 00:23:53.712 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 85829 ']' 00:23:53.712 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 85829 00:23:53.712 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:23:53.712 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:53.712 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85829 00:23:53.970 killing process with pid 85829 00:23:53.970 Received shutdown signal, test time was about 60.000000 seconds 00:23:53.970 00:23:53.970 Latency(us) 00:23:53.970 [2024-10-15T09:23:37.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.970 [2024-10-15T09:23:37.898Z] 
=================================================================================================================== 00:23:53.970 [2024-10-15T09:23:37.898Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:53.970 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:53.970 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:53.970 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85829' 00:23:53.970 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 85829 00:23:53.970 [2024-10-15 09:23:37.642866] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:53.970 09:23:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 85829 00:23:53.970 [2024-10-15 09:23:37.643064] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:53.970 [2024-10-15 09:23:37.643213] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:53.970 [2024-10-15 09:23:37.643238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:54.229 [2024-10-15 09:23:38.122975] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:55.606 09:23:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:23:55.606 00:23:55.606 real 0m28.987s 00:23:55.606 user 0m37.672s 00:23:55.606 sys 0m3.017s 00:23:55.606 09:23:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:55.606 09:23:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:55.606 ************************************ 00:23:55.606 END TEST raid5f_rebuild_test_sb 00:23:55.606 ************************************ 00:23:55.606 09:23:39 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:23:55.606 09:23:39 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:23:55.606 09:23:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:55.606 09:23:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:55.606 09:23:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:55.606 ************************************ 00:23:55.606 START TEST raid_state_function_test_sb_4k 00:23:55.606 ************************************ 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:55.606 09:23:39 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:55.606 Process raid pid: 86651 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86651 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86651' 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86651 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 86651 ']' 00:23:55.606 09:23:39 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:55.606 09:23:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:55.606 [2024-10-15 09:23:39.422436] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:23:55.606 [2024-10-15 09:23:39.422836] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.865 [2024-10-15 09:23:39.606823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.865 [2024-10-15 09:23:39.779470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.124 [2024-10-15 09:23:40.010057] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:56.124 [2024-10-15 09:23:40.010139] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:56.691 [2024-10-15 09:23:40.393424] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:56.691 [2024-10-15 09:23:40.393641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:56.691 [2024-10-15 09:23:40.393673] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:56.691 [2024-10-15 09:23:40.393693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:56.691 
09:23:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:56.691 "name": "Existed_Raid", 00:23:56.691 "uuid": "228d31ba-26e3-4cb9-8435-fe767f04cc31", 00:23:56.691 "strip_size_kb": 0, 00:23:56.691 "state": "configuring", 00:23:56.691 "raid_level": "raid1", 00:23:56.691 "superblock": true, 00:23:56.691 "num_base_bdevs": 2, 00:23:56.691 "num_base_bdevs_discovered": 0, 00:23:56.691 "num_base_bdevs_operational": 2, 00:23:56.691 "base_bdevs_list": [ 00:23:56.691 { 00:23:56.691 "name": "BaseBdev1", 00:23:56.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:56.691 "is_configured": false, 00:23:56.691 "data_offset": 0, 00:23:56.691 "data_size": 0 00:23:56.691 }, 00:23:56.691 { 00:23:56.691 "name": "BaseBdev2", 00:23:56.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:56.691 "is_configured": false, 00:23:56.691 "data_offset": 0, 00:23:56.691 "data_size": 0 00:23:56.691 } 00:23:56.691 ] 00:23:56.691 }' 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:56.691 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:57.258 [2024-10-15 09:23:40.925452] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:57.258 [2024-10-15 09:23:40.925633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:57.258 [2024-10-15 09:23:40.933463] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:57.258 [2024-10-15 09:23:40.933518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:57.258 [2024-10-15 09:23:40.933535] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:57.258 [2024-10-15 09:23:40.933556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.258 09:23:40 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:57.258 [2024-10-15 09:23:40.983096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:57.258 BaseBdev1 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.258 09:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:57.258 [ 00:23:57.258 { 00:23:57.258 "name": "BaseBdev1", 00:23:57.258 "aliases": [ 00:23:57.258 
"8545410c-559a-4002-b12c-2b711ea4259a" 00:23:57.258 ], 00:23:57.258 "product_name": "Malloc disk", 00:23:57.258 "block_size": 4096, 00:23:57.258 "num_blocks": 8192, 00:23:57.258 "uuid": "8545410c-559a-4002-b12c-2b711ea4259a", 00:23:57.258 "assigned_rate_limits": { 00:23:57.258 "rw_ios_per_sec": 0, 00:23:57.258 "rw_mbytes_per_sec": 0, 00:23:57.258 "r_mbytes_per_sec": 0, 00:23:57.258 "w_mbytes_per_sec": 0 00:23:57.258 }, 00:23:57.258 "claimed": true, 00:23:57.258 "claim_type": "exclusive_write", 00:23:57.258 "zoned": false, 00:23:57.258 "supported_io_types": { 00:23:57.258 "read": true, 00:23:57.258 "write": true, 00:23:57.258 "unmap": true, 00:23:57.258 "flush": true, 00:23:57.258 "reset": true, 00:23:57.258 "nvme_admin": false, 00:23:57.258 "nvme_io": false, 00:23:57.258 "nvme_io_md": false, 00:23:57.258 "write_zeroes": true, 00:23:57.258 "zcopy": true, 00:23:57.258 "get_zone_info": false, 00:23:57.258 "zone_management": false, 00:23:57.258 "zone_append": false, 00:23:57.258 "compare": false, 00:23:57.258 "compare_and_write": false, 00:23:57.258 "abort": true, 00:23:57.258 "seek_hole": false, 00:23:57.258 "seek_data": false, 00:23:57.258 "copy": true, 00:23:57.258 "nvme_iov_md": false 00:23:57.258 }, 00:23:57.258 "memory_domains": [ 00:23:57.258 { 00:23:57.258 "dma_device_id": "system", 00:23:57.258 "dma_device_type": 1 00:23:57.258 }, 00:23:57.258 { 00:23:57.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.258 "dma_device_type": 2 00:23:57.258 } 00:23:57.258 ], 00:23:57.258 "driver_specific": {} 00:23:57.258 } 00:23:57.258 ] 00:23:57.258 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.258 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:23:57.258 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:57.258 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:57.258 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:57.258 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:57.258 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:57.258 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:57.258 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:57.258 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:57.258 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:57.258 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:57.258 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.258 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.258 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:57.258 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:57.259 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.259 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:57.259 "name": "Existed_Raid", 00:23:57.259 "uuid": "0bf78240-b29e-4c6d-9e23-7aaba8aede4d", 00:23:57.259 "strip_size_kb": 0, 00:23:57.259 "state": "configuring", 00:23:57.259 "raid_level": "raid1", 00:23:57.259 "superblock": true, 00:23:57.259 "num_base_bdevs": 2, 00:23:57.259 
"num_base_bdevs_discovered": 1, 00:23:57.259 "num_base_bdevs_operational": 2, 00:23:57.259 "base_bdevs_list": [ 00:23:57.259 { 00:23:57.259 "name": "BaseBdev1", 00:23:57.259 "uuid": "8545410c-559a-4002-b12c-2b711ea4259a", 00:23:57.259 "is_configured": true, 00:23:57.259 "data_offset": 256, 00:23:57.259 "data_size": 7936 00:23:57.259 }, 00:23:57.259 { 00:23:57.259 "name": "BaseBdev2", 00:23:57.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.259 "is_configured": false, 00:23:57.259 "data_offset": 0, 00:23:57.259 "data_size": 0 00:23:57.259 } 00:23:57.259 ] 00:23:57.259 }' 00:23:57.259 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:57.259 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:57.826 [2024-10-15 09:23:41.527328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:57.826 [2024-10-15 09:23:41.527403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:57.826 [2024-10-15 09:23:41.535376] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:57.826 [2024-10-15 09:23:41.538165] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:57.826 [2024-10-15 09:23:41.538213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:57.826 "name": "Existed_Raid", 00:23:57.826 "uuid": "4e83b411-2009-4caf-a643-be91b7bcbd40", 00:23:57.826 "strip_size_kb": 0, 00:23:57.826 "state": "configuring", 00:23:57.826 "raid_level": "raid1", 00:23:57.826 "superblock": true, 00:23:57.826 "num_base_bdevs": 2, 00:23:57.826 "num_base_bdevs_discovered": 1, 00:23:57.826 "num_base_bdevs_operational": 2, 00:23:57.826 "base_bdevs_list": [ 00:23:57.826 { 00:23:57.826 "name": "BaseBdev1", 00:23:57.826 "uuid": "8545410c-559a-4002-b12c-2b711ea4259a", 00:23:57.826 "is_configured": true, 00:23:57.826 "data_offset": 256, 00:23:57.826 "data_size": 7936 00:23:57.826 }, 00:23:57.826 { 00:23:57.826 "name": "BaseBdev2", 00:23:57.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.826 "is_configured": false, 00:23:57.826 "data_offset": 0, 00:23:57.826 "data_size": 0 00:23:57.826 } 00:23:57.826 ] 00:23:57.826 }' 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:57.826 09:23:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.394 09:23:42 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:58.394 [2024-10-15 09:23:42.099499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:58.394 [2024-10-15 09:23:42.100194] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:58.394 [2024-10-15 09:23:42.100220] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:58.394 BaseBdev2 00:23:58.394 [2024-10-15 09:23:42.100573] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:58.394 [2024-10-15 09:23:42.100811] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:58.394 [2024-10-15 09:23:42.100834] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:58.394 [2024-10-15 09:23:42.101019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:23:58.394 09:23:42 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:58.394 [ 00:23:58.394 { 00:23:58.394 "name": "BaseBdev2", 00:23:58.394 "aliases": [ 00:23:58.394 "ab539364-1254-44a1-8a38-aeba32ffc9f2" 00:23:58.394 ], 00:23:58.394 "product_name": "Malloc disk", 00:23:58.394 "block_size": 4096, 00:23:58.394 "num_blocks": 8192, 00:23:58.394 "uuid": "ab539364-1254-44a1-8a38-aeba32ffc9f2", 00:23:58.394 "assigned_rate_limits": { 00:23:58.394 "rw_ios_per_sec": 0, 00:23:58.394 "rw_mbytes_per_sec": 0, 00:23:58.394 "r_mbytes_per_sec": 0, 00:23:58.394 "w_mbytes_per_sec": 0 00:23:58.394 }, 00:23:58.394 "claimed": true, 00:23:58.394 "claim_type": "exclusive_write", 00:23:58.394 "zoned": false, 00:23:58.394 "supported_io_types": { 00:23:58.394 "read": true, 00:23:58.394 "write": true, 00:23:58.394 "unmap": true, 00:23:58.394 "flush": true, 00:23:58.394 "reset": true, 00:23:58.394 "nvme_admin": false, 00:23:58.394 "nvme_io": false, 00:23:58.394 "nvme_io_md": false, 00:23:58.394 "write_zeroes": true, 00:23:58.394 "zcopy": true, 00:23:58.394 "get_zone_info": false, 00:23:58.394 "zone_management": false, 00:23:58.394 "zone_append": false, 00:23:58.394 "compare": false, 00:23:58.394 "compare_and_write": false, 00:23:58.394 "abort": true, 00:23:58.394 "seek_hole": false, 00:23:58.394 "seek_data": false, 00:23:58.394 "copy": true, 00:23:58.394 "nvme_iov_md": false 
00:23:58.394 }, 00:23:58.394 "memory_domains": [ 00:23:58.394 { 00:23:58.394 "dma_device_id": "system", 00:23:58.394 "dma_device_type": 1 00:23:58.394 }, 00:23:58.394 { 00:23:58.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:58.394 "dma_device_type": 2 00:23:58.394 } 00:23:58.394 ], 00:23:58.394 "driver_specific": {} 00:23:58.394 } 00:23:58.394 ] 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.394 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:58.394 "name": "Existed_Raid", 00:23:58.394 "uuid": "4e83b411-2009-4caf-a643-be91b7bcbd40", 00:23:58.394 "strip_size_kb": 0, 00:23:58.394 "state": "online", 00:23:58.394 "raid_level": "raid1", 00:23:58.394 "superblock": true, 00:23:58.395 "num_base_bdevs": 2, 00:23:58.395 "num_base_bdevs_discovered": 2, 00:23:58.395 "num_base_bdevs_operational": 2, 00:23:58.395 "base_bdevs_list": [ 00:23:58.395 { 00:23:58.395 "name": "BaseBdev1", 00:23:58.395 "uuid": "8545410c-559a-4002-b12c-2b711ea4259a", 00:23:58.395 "is_configured": true, 00:23:58.395 "data_offset": 256, 00:23:58.395 "data_size": 7936 00:23:58.395 }, 00:23:58.395 { 00:23:58.395 "name": "BaseBdev2", 00:23:58.395 "uuid": "ab539364-1254-44a1-8a38-aeba32ffc9f2", 00:23:58.395 "is_configured": true, 00:23:58.395 "data_offset": 256, 00:23:58.395 "data_size": 7936 00:23:58.395 } 00:23:58.395 ] 00:23:58.395 }' 00:23:58.395 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:58.395 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:58.962 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:58.962 09:23:42 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:58.962 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:58.962 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:58.962 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:23:58.962 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:58.962 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:58.962 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.962 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:58.962 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:58.962 [2024-10-15 09:23:42.640078] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:58.962 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.962 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:58.962 "name": "Existed_Raid", 00:23:58.962 "aliases": [ 00:23:58.962 "4e83b411-2009-4caf-a643-be91b7bcbd40" 00:23:58.962 ], 00:23:58.962 "product_name": "Raid Volume", 00:23:58.962 "block_size": 4096, 00:23:58.962 "num_blocks": 7936, 00:23:58.962 "uuid": "4e83b411-2009-4caf-a643-be91b7bcbd40", 00:23:58.962 "assigned_rate_limits": { 00:23:58.962 "rw_ios_per_sec": 0, 00:23:58.962 "rw_mbytes_per_sec": 0, 00:23:58.962 "r_mbytes_per_sec": 0, 00:23:58.963 "w_mbytes_per_sec": 0 00:23:58.963 }, 00:23:58.963 "claimed": false, 00:23:58.963 "zoned": false, 00:23:58.963 "supported_io_types": { 00:23:58.963 "read": true, 
00:23:58.963 "write": true, 00:23:58.963 "unmap": false, 00:23:58.963 "flush": false, 00:23:58.963 "reset": true, 00:23:58.963 "nvme_admin": false, 00:23:58.963 "nvme_io": false, 00:23:58.963 "nvme_io_md": false, 00:23:58.963 "write_zeroes": true, 00:23:58.963 "zcopy": false, 00:23:58.963 "get_zone_info": false, 00:23:58.963 "zone_management": false, 00:23:58.963 "zone_append": false, 00:23:58.963 "compare": false, 00:23:58.963 "compare_and_write": false, 00:23:58.963 "abort": false, 00:23:58.963 "seek_hole": false, 00:23:58.963 "seek_data": false, 00:23:58.963 "copy": false, 00:23:58.963 "nvme_iov_md": false 00:23:58.963 }, 00:23:58.963 "memory_domains": [ 00:23:58.963 { 00:23:58.963 "dma_device_id": "system", 00:23:58.963 "dma_device_type": 1 00:23:58.963 }, 00:23:58.963 { 00:23:58.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:58.963 "dma_device_type": 2 00:23:58.963 }, 00:23:58.963 { 00:23:58.963 "dma_device_id": "system", 00:23:58.963 "dma_device_type": 1 00:23:58.963 }, 00:23:58.963 { 00:23:58.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:58.963 "dma_device_type": 2 00:23:58.963 } 00:23:58.963 ], 00:23:58.963 "driver_specific": { 00:23:58.963 "raid": { 00:23:58.963 "uuid": "4e83b411-2009-4caf-a643-be91b7bcbd40", 00:23:58.963 "strip_size_kb": 0, 00:23:58.963 "state": "online", 00:23:58.963 "raid_level": "raid1", 00:23:58.963 "superblock": true, 00:23:58.963 "num_base_bdevs": 2, 00:23:58.963 "num_base_bdevs_discovered": 2, 00:23:58.963 "num_base_bdevs_operational": 2, 00:23:58.963 "base_bdevs_list": [ 00:23:58.963 { 00:23:58.963 "name": "BaseBdev1", 00:23:58.963 "uuid": "8545410c-559a-4002-b12c-2b711ea4259a", 00:23:58.963 "is_configured": true, 00:23:58.963 "data_offset": 256, 00:23:58.963 "data_size": 7936 00:23:58.963 }, 00:23:58.963 { 00:23:58.963 "name": "BaseBdev2", 00:23:58.963 "uuid": "ab539364-1254-44a1-8a38-aeba32ffc9f2", 00:23:58.963 "is_configured": true, 00:23:58.963 "data_offset": 256, 00:23:58.963 "data_size": 7936 00:23:58.963 } 
00:23:58.963 ] 00:23:58.963 } 00:23:58.963 } 00:23:58.963 }' 00:23:58.963 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:58.963 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:58.963 BaseBdev2' 00:23:58.963 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:58.963 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:23:58.963 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:58.963 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:58.963 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.963 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:58.963 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:58.963 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.963 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:23:58.963 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:23:58.963 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:58.963 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:58.963 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.963 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:58.963 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:58.963 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.222 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:23:59.222 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:23:59.222 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:59.222 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.222 09:23:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:59.222 [2024-10-15 09:23:42.911898] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:23:59.222 09:23:43 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:59.222 "name": "Existed_Raid", 00:23:59.222 "uuid": "4e83b411-2009-4caf-a643-be91b7bcbd40", 00:23:59.222 "strip_size_kb": 0, 00:23:59.222 "state": "online", 00:23:59.222 "raid_level": "raid1", 00:23:59.222 "superblock": true, 00:23:59.222 
"num_base_bdevs": 2, 00:23:59.222 "num_base_bdevs_discovered": 1, 00:23:59.222 "num_base_bdevs_operational": 1, 00:23:59.222 "base_bdevs_list": [ 00:23:59.222 { 00:23:59.222 "name": null, 00:23:59.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.222 "is_configured": false, 00:23:59.222 "data_offset": 0, 00:23:59.222 "data_size": 7936 00:23:59.222 }, 00:23:59.222 { 00:23:59.222 "name": "BaseBdev2", 00:23:59.222 "uuid": "ab539364-1254-44a1-8a38-aeba32ffc9f2", 00:23:59.222 "is_configured": true, 00:23:59.222 "data_offset": 256, 00:23:59.222 "data_size": 7936 00:23:59.222 } 00:23:59.222 ] 00:23:59.222 }' 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:59.222 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:59.790 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:59.790 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:59.790 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.790 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.790 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:59.790 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:59.790 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.790 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:59.790 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:59.790 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:23:59.790 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.790 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:59.790 [2024-10-15 09:23:43.610346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:59.790 [2024-10-15 09:23:43.610538] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:59.790 [2024-10-15 09:23:43.706335] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:59.790 [2024-10-15 09:23:43.706699] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:59.790 [2024-10-15 09:23:43.706858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:59.790 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.790 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:59.790 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:59.790 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.790 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:59.790 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.790 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:00.102 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.102 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:00.102 09:23:43 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:00.102 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:24:00.102 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86651 00:24:00.102 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 86651 ']' 00:24:00.102 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 86651 00:24:00.102 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:24:00.102 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:00.102 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86651 00:24:00.102 killing process with pid 86651 00:24:00.102 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:00.102 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:00.102 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86651' 00:24:00.102 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 86651 00:24:00.102 [2024-10-15 09:23:43.801103] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:00.102 09:23:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 86651 00:24:00.102 [2024-10-15 09:23:43.817297] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:01.041 09:23:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:24:01.041 00:24:01.041 real 0m5.634s 00:24:01.041 user 0m8.342s 00:24:01.041 sys 0m0.915s 00:24:01.041 09:23:44 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:01.041 09:23:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:01.041 ************************************ 00:24:01.041 END TEST raid_state_function_test_sb_4k 00:24:01.041 ************************************ 00:24:01.301 09:23:44 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:24:01.301 09:23:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:24:01.301 09:23:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:01.301 09:23:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:01.301 ************************************ 00:24:01.301 START TEST raid_superblock_test_4k 00:24:01.301 ************************************ 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:24:01.301 
09:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:24:01.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86907 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86907 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 86907 ']' 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:01.301 09:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:01.301 [2024-10-15 09:23:45.143662] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:24:01.301 [2024-10-15 09:23:45.144130] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86907 ] 00:24:01.560 [2024-10-15 09:23:45.327716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.818 [2024-10-15 09:23:45.497435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.818 [2024-10-15 09:23:45.734396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:01.818 [2024-10-15 09:23:45.734485] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:02.386 malloc1 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:02.386 [2024-10-15 09:23:46.176397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:02.386 [2024-10-15 09:23:46.176488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:02.386 [2024-10-15 09:23:46.176521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:02.386 [2024-10-15 09:23:46.176537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:02.386 [2024-10-15 09:23:46.179522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:02.386 [2024-10-15 09:23:46.179566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:02.386 pt1 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:02.386 malloc2 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:02.386 [2024-10-15 09:23:46.237288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:02.386 [2024-10-15 09:23:46.237365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:02.386 [2024-10-15 09:23:46.237396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:02.386 [2024-10-15 09:23:46.237411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:02.386 [2024-10-15 09:23:46.240303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:02.386 [2024-10-15 
09:23:46.240347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:02.386 pt2 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:02.386 [2024-10-15 09:23:46.245336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:02.386 [2024-10-15 09:23:46.247872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:02.386 [2024-10-15 09:23:46.248283] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:02.386 [2024-10-15 09:23:46.248310] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:02.386 [2024-10-15 09:23:46.248618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:02.386 [2024-10-15 09:23:46.248826] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:02.386 [2024-10-15 09:23:46.248846] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:02.386 [2024-10-15 09:23:46.249032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.386 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:02.386 "name": "raid_bdev1", 00:24:02.386 "uuid": "1707f5b2-69d1-47df-9ec7-a54483734d7e", 00:24:02.386 "strip_size_kb": 0, 00:24:02.386 "state": "online", 00:24:02.386 "raid_level": "raid1", 00:24:02.386 "superblock": true, 00:24:02.386 "num_base_bdevs": 2, 00:24:02.386 
"num_base_bdevs_discovered": 2, 00:24:02.386 "num_base_bdevs_operational": 2, 00:24:02.386 "base_bdevs_list": [ 00:24:02.386 { 00:24:02.386 "name": "pt1", 00:24:02.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:02.386 "is_configured": true, 00:24:02.386 "data_offset": 256, 00:24:02.386 "data_size": 7936 00:24:02.386 }, 00:24:02.386 { 00:24:02.386 "name": "pt2", 00:24:02.387 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:02.387 "is_configured": true, 00:24:02.387 "data_offset": 256, 00:24:02.387 "data_size": 7936 00:24:02.387 } 00:24:02.387 ] 00:24:02.387 }' 00:24:02.387 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:02.387 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:02.954 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:24:02.954 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:02.954 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:02.954 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:02.954 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:24:02.954 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:02.954 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:02.954 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:02.954 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.954 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:02.954 [2024-10-15 09:23:46.805873] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:24:02.954 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.954 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:02.954 "name": "raid_bdev1", 00:24:02.954 "aliases": [ 00:24:02.954 "1707f5b2-69d1-47df-9ec7-a54483734d7e" 00:24:02.954 ], 00:24:02.954 "product_name": "Raid Volume", 00:24:02.954 "block_size": 4096, 00:24:02.954 "num_blocks": 7936, 00:24:02.954 "uuid": "1707f5b2-69d1-47df-9ec7-a54483734d7e", 00:24:02.954 "assigned_rate_limits": { 00:24:02.954 "rw_ios_per_sec": 0, 00:24:02.954 "rw_mbytes_per_sec": 0, 00:24:02.954 "r_mbytes_per_sec": 0, 00:24:02.954 "w_mbytes_per_sec": 0 00:24:02.954 }, 00:24:02.954 "claimed": false, 00:24:02.954 "zoned": false, 00:24:02.954 "supported_io_types": { 00:24:02.954 "read": true, 00:24:02.954 "write": true, 00:24:02.954 "unmap": false, 00:24:02.954 "flush": false, 00:24:02.954 "reset": true, 00:24:02.954 "nvme_admin": false, 00:24:02.954 "nvme_io": false, 00:24:02.954 "nvme_io_md": false, 00:24:02.954 "write_zeroes": true, 00:24:02.954 "zcopy": false, 00:24:02.954 "get_zone_info": false, 00:24:02.954 "zone_management": false, 00:24:02.954 "zone_append": false, 00:24:02.954 "compare": false, 00:24:02.954 "compare_and_write": false, 00:24:02.954 "abort": false, 00:24:02.954 "seek_hole": false, 00:24:02.954 "seek_data": false, 00:24:02.954 "copy": false, 00:24:02.954 "nvme_iov_md": false 00:24:02.954 }, 00:24:02.954 "memory_domains": [ 00:24:02.954 { 00:24:02.954 "dma_device_id": "system", 00:24:02.954 "dma_device_type": 1 00:24:02.954 }, 00:24:02.954 { 00:24:02.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:02.954 "dma_device_type": 2 00:24:02.954 }, 00:24:02.954 { 00:24:02.954 "dma_device_id": "system", 00:24:02.954 "dma_device_type": 1 00:24:02.954 }, 00:24:02.954 { 00:24:02.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:02.954 "dma_device_type": 2 00:24:02.954 } 00:24:02.954 ], 
00:24:02.954 "driver_specific": { 00:24:02.954 "raid": { 00:24:02.955 "uuid": "1707f5b2-69d1-47df-9ec7-a54483734d7e", 00:24:02.955 "strip_size_kb": 0, 00:24:02.955 "state": "online", 00:24:02.955 "raid_level": "raid1", 00:24:02.955 "superblock": true, 00:24:02.955 "num_base_bdevs": 2, 00:24:02.955 "num_base_bdevs_discovered": 2, 00:24:02.955 "num_base_bdevs_operational": 2, 00:24:02.955 "base_bdevs_list": [ 00:24:02.955 { 00:24:02.955 "name": "pt1", 00:24:02.955 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:02.955 "is_configured": true, 00:24:02.955 "data_offset": 256, 00:24:02.955 "data_size": 7936 00:24:02.955 }, 00:24:02.955 { 00:24:02.955 "name": "pt2", 00:24:02.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:02.955 "is_configured": true, 00:24:02.955 "data_offset": 256, 00:24:02.955 "data_size": 7936 00:24:02.955 } 00:24:02.955 ] 00:24:02.955 } 00:24:02.955 } 00:24:02.955 }' 00:24:02.955 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:03.213 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:03.213 pt2' 00:24:03.213 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:03.213 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:24:03.213 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:03.213 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:03.213 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.213 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:03.213 09:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:03.213 09:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.213 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:03.213 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:03.213 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:03.213 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:03.213 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.213 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:03.213 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:03.213 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.214 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:03.214 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:03.214 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:24:03.214 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:03.214 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.214 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:03.214 [2024-10-15 09:23:47.085800] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:03.214 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:24:03.214 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1707f5b2-69d1-47df-9ec7-a54483734d7e 00:24:03.214 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 1707f5b2-69d1-47df-9ec7-a54483734d7e ']' 00:24:03.214 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:03.214 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.214 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:03.214 [2024-10-15 09:23:47.133498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:03.214 [2024-10-15 09:23:47.133527] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:03.214 [2024-10-15 09:23:47.133637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:03.214 [2024-10-15 09:23:47.133721] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:03.214 [2024-10-15 09:23:47.133740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:03.214 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:03.473 [2024-10-15 09:23:47.273604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:03.473 [2024-10-15 09:23:47.276253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:03.473 [2024-10-15 09:23:47.276362] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:03.473 [2024-10-15 09:23:47.276448] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:03.473 [2024-10-15 09:23:47.276476] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:03.473 [2024-10-15 09:23:47.276492] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:24:03.473 request: 00:24:03.473 { 00:24:03.473 "name": "raid_bdev1", 00:24:03.473 "raid_level": "raid1", 00:24:03.473 "base_bdevs": [ 00:24:03.473 "malloc1", 00:24:03.473 "malloc2" 00:24:03.473 ], 00:24:03.473 "superblock": false, 00:24:03.473 "method": "bdev_raid_create", 00:24:03.473 "req_id": 1 00:24:03.473 } 00:24:03.473 Got JSON-RPC error response 00:24:03.473 response: 00:24:03.473 { 00:24:03.473 "code": -17, 00:24:03.473 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:03.473 } 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.473 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:03.473 [2024-10-15 09:23:47.341561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:03.473 [2024-10-15 09:23:47.341777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.473 [2024-10-15 09:23:47.341930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:03.473 [2024-10-15 09:23:47.342061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.473 [2024-10-15 09:23:47.345283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.474 [2024-10-15 09:23:47.345464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:03.474 [2024-10-15 09:23:47.345737] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:03.474 [2024-10-15 09:23:47.345923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:03.474 pt1 00:24:03.474 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.474 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:24:03.474 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:03.474 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:03.474 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:03.474 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:03.474 09:23:47 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:03.474 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:03.474 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:03.474 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:03.474 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:03.474 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.474 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.474 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.474 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:03.474 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.733 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:03.733 "name": "raid_bdev1", 00:24:03.733 "uuid": "1707f5b2-69d1-47df-9ec7-a54483734d7e", 00:24:03.733 "strip_size_kb": 0, 00:24:03.733 "state": "configuring", 00:24:03.733 "raid_level": "raid1", 00:24:03.733 "superblock": true, 00:24:03.733 "num_base_bdevs": 2, 00:24:03.733 "num_base_bdevs_discovered": 1, 00:24:03.733 "num_base_bdevs_operational": 2, 00:24:03.733 "base_bdevs_list": [ 00:24:03.733 { 00:24:03.733 "name": "pt1", 00:24:03.733 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:03.733 "is_configured": true, 00:24:03.733 "data_offset": 256, 00:24:03.733 "data_size": 7936 00:24:03.733 }, 00:24:03.733 { 00:24:03.733 "name": null, 00:24:03.733 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:03.733 "is_configured": false, 00:24:03.733 "data_offset": 256, 00:24:03.733 "data_size": 7936 00:24:03.733 } 
00:24:03.733 ] 00:24:03.733 }' 00:24:03.733 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:03.733 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:03.992 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:24:03.992 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:24:03.992 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:03.992 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:03.993 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.993 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:03.993 [2024-10-15 09:23:47.861972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:03.993 [2024-10-15 09:23:47.862222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.993 [2024-10-15 09:23:47.862273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:03.993 [2024-10-15 09:23:47.862292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.993 [2024-10-15 09:23:47.862978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.993 [2024-10-15 09:23:47.863016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:03.993 [2024-10-15 09:23:47.863144] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:03.993 [2024-10-15 09:23:47.863189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:03.993 [2024-10-15 09:23:47.863350] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:24:03.993 [2024-10-15 09:23:47.863382] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:03.993 [2024-10-15 09:23:47.863693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:03.993 [2024-10-15 09:23:47.863900] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:03.993 [2024-10-15 09:23:47.863916] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:24:03.993 [2024-10-15 09:23:47.864095] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:03.993 pt2 00:24:03.993 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.993 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:03.993 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:03.993 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:03.993 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:03.993 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:03.993 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:03.993 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:03.993 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:03.993 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:03.993 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:03.993 09:23:47 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:03.993 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:03.993 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.993 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.993 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.993 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:03.993 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.251 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:04.251 "name": "raid_bdev1", 00:24:04.251 "uuid": "1707f5b2-69d1-47df-9ec7-a54483734d7e", 00:24:04.251 "strip_size_kb": 0, 00:24:04.252 "state": "online", 00:24:04.252 "raid_level": "raid1", 00:24:04.252 "superblock": true, 00:24:04.252 "num_base_bdevs": 2, 00:24:04.252 "num_base_bdevs_discovered": 2, 00:24:04.252 "num_base_bdevs_operational": 2, 00:24:04.252 "base_bdevs_list": [ 00:24:04.252 { 00:24:04.252 "name": "pt1", 00:24:04.252 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:04.252 "is_configured": true, 00:24:04.252 "data_offset": 256, 00:24:04.252 "data_size": 7936 00:24:04.252 }, 00:24:04.252 { 00:24:04.252 "name": "pt2", 00:24:04.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:04.252 "is_configured": true, 00:24:04.252 "data_offset": 256, 00:24:04.252 "data_size": 7936 00:24:04.252 } 00:24:04.252 ] 00:24:04.252 }' 00:24:04.252 09:23:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:04.252 09:23:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:04.510 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:24:04.510 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:04.510 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:04.510 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:04.510 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:24:04.510 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:04.510 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:04.510 09:23:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.510 09:23:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:04.510 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:04.510 [2024-10-15 09:23:48.422523] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:04.769 09:23:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.769 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:04.769 "name": "raid_bdev1", 00:24:04.769 "aliases": [ 00:24:04.769 "1707f5b2-69d1-47df-9ec7-a54483734d7e" 00:24:04.769 ], 00:24:04.769 "product_name": "Raid Volume", 00:24:04.769 "block_size": 4096, 00:24:04.769 "num_blocks": 7936, 00:24:04.769 "uuid": "1707f5b2-69d1-47df-9ec7-a54483734d7e", 00:24:04.769 "assigned_rate_limits": { 00:24:04.769 "rw_ios_per_sec": 0, 00:24:04.769 "rw_mbytes_per_sec": 0, 00:24:04.769 "r_mbytes_per_sec": 0, 00:24:04.769 "w_mbytes_per_sec": 0 00:24:04.769 }, 00:24:04.769 "claimed": false, 00:24:04.769 "zoned": false, 00:24:04.769 "supported_io_types": { 00:24:04.769 "read": true, 00:24:04.769 "write": true, 00:24:04.769 "unmap": false, 
00:24:04.769 "flush": false, 00:24:04.769 "reset": true, 00:24:04.769 "nvme_admin": false, 00:24:04.769 "nvme_io": false, 00:24:04.769 "nvme_io_md": false, 00:24:04.769 "write_zeroes": true, 00:24:04.769 "zcopy": false, 00:24:04.769 "get_zone_info": false, 00:24:04.769 "zone_management": false, 00:24:04.769 "zone_append": false, 00:24:04.769 "compare": false, 00:24:04.769 "compare_and_write": false, 00:24:04.769 "abort": false, 00:24:04.769 "seek_hole": false, 00:24:04.769 "seek_data": false, 00:24:04.769 "copy": false, 00:24:04.769 "nvme_iov_md": false 00:24:04.769 }, 00:24:04.769 "memory_domains": [ 00:24:04.769 { 00:24:04.769 "dma_device_id": "system", 00:24:04.769 "dma_device_type": 1 00:24:04.769 }, 00:24:04.769 { 00:24:04.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:04.769 "dma_device_type": 2 00:24:04.769 }, 00:24:04.769 { 00:24:04.769 "dma_device_id": "system", 00:24:04.769 "dma_device_type": 1 00:24:04.769 }, 00:24:04.769 { 00:24:04.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:04.769 "dma_device_type": 2 00:24:04.769 } 00:24:04.769 ], 00:24:04.769 "driver_specific": { 00:24:04.769 "raid": { 00:24:04.769 "uuid": "1707f5b2-69d1-47df-9ec7-a54483734d7e", 00:24:04.769 "strip_size_kb": 0, 00:24:04.769 "state": "online", 00:24:04.769 "raid_level": "raid1", 00:24:04.769 "superblock": true, 00:24:04.769 "num_base_bdevs": 2, 00:24:04.769 "num_base_bdevs_discovered": 2, 00:24:04.769 "num_base_bdevs_operational": 2, 00:24:04.769 "base_bdevs_list": [ 00:24:04.769 { 00:24:04.769 "name": "pt1", 00:24:04.769 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:04.769 "is_configured": true, 00:24:04.769 "data_offset": 256, 00:24:04.769 "data_size": 7936 00:24:04.769 }, 00:24:04.769 { 00:24:04.769 "name": "pt2", 00:24:04.769 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:04.769 "is_configured": true, 00:24:04.769 "data_offset": 256, 00:24:04.769 "data_size": 7936 00:24:04.769 } 00:24:04.769 ] 00:24:04.769 } 00:24:04.769 } 00:24:04.769 }' 00:24:04.769 
09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:04.769 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:04.769 pt2' 00:24:04.769 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:04.769 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:24:04.769 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:04.769 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:04.769 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:04.769 09:23:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.769 09:23:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:04.769 09:23:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.770 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:04.770 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:04.770 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:04.770 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:04.770 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:04.770 09:23:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.770 
09:23:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:04.770 09:23:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.770 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:04.770 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:04.770 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:04.770 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:24:04.770 09:23:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.770 09:23:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:05.029 [2024-10-15 09:23:48.698616] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 1707f5b2-69d1-47df-9ec7-a54483734d7e '!=' 1707f5b2-69d1-47df-9ec7-a54483734d7e ']' 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:05.029 [2024-10-15 09:23:48.746319] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:05.029 
09:23:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:05.029 "name": "raid_bdev1", 00:24:05.029 "uuid": "1707f5b2-69d1-47df-9ec7-a54483734d7e", 
00:24:05.029 "strip_size_kb": 0, 00:24:05.029 "state": "online", 00:24:05.029 "raid_level": "raid1", 00:24:05.029 "superblock": true, 00:24:05.029 "num_base_bdevs": 2, 00:24:05.029 "num_base_bdevs_discovered": 1, 00:24:05.029 "num_base_bdevs_operational": 1, 00:24:05.029 "base_bdevs_list": [ 00:24:05.029 { 00:24:05.029 "name": null, 00:24:05.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.029 "is_configured": false, 00:24:05.029 "data_offset": 0, 00:24:05.029 "data_size": 7936 00:24:05.029 }, 00:24:05.029 { 00:24:05.029 "name": "pt2", 00:24:05.029 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:05.029 "is_configured": true, 00:24:05.029 "data_offset": 256, 00:24:05.029 "data_size": 7936 00:24:05.029 } 00:24:05.029 ] 00:24:05.029 }' 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:05.029 09:23:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:05.598 [2024-10-15 09:23:49.278458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:05.598 [2024-10-15 09:23:49.278645] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:05.598 [2024-10-15 09:23:49.278784] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:05.598 [2024-10-15 09:23:49.278860] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:05.598 [2024-10-15 09:23:49.278894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:24:05.598 09:23:49 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:24:05.598 09:23:49 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:05.598 [2024-10-15 09:23:49.354508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:05.598 [2024-10-15 09:23:49.354605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:05.598 [2024-10-15 09:23:49.354635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:05.598 [2024-10-15 09:23:49.354653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:05.598 [2024-10-15 09:23:49.357839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:05.598 [2024-10-15 09:23:49.358058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:05.598 [2024-10-15 09:23:49.358236] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:05.598 [2024-10-15 09:23:49.358315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:05.598 [2024-10-15 09:23:49.358473] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:05.598 [2024-10-15 09:23:49.358497] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:05.598 [2024-10-15 09:23:49.358819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:05.598 [2024-10-15 09:23:49.359028] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:05.598 [2024-10-15 09:23:49.359044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:24:05.598 [2024-10-15 09:23:49.359314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:05.598 pt2 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:05.598 "name": "raid_bdev1", 00:24:05.598 "uuid": "1707f5b2-69d1-47df-9ec7-a54483734d7e", 00:24:05.598 "strip_size_kb": 0, 00:24:05.598 "state": "online", 00:24:05.598 "raid_level": "raid1", 00:24:05.598 "superblock": true, 00:24:05.598 "num_base_bdevs": 2, 00:24:05.598 "num_base_bdevs_discovered": 1, 00:24:05.598 "num_base_bdevs_operational": 1, 00:24:05.598 "base_bdevs_list": [ 00:24:05.598 { 00:24:05.598 "name": null, 00:24:05.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.598 "is_configured": false, 00:24:05.598 "data_offset": 256, 00:24:05.598 "data_size": 7936 00:24:05.598 }, 00:24:05.598 { 00:24:05.598 "name": "pt2", 00:24:05.598 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:05.598 "is_configured": true, 00:24:05.598 "data_offset": 256, 00:24:05.598 "data_size": 7936 00:24:05.598 } 00:24:05.598 ] 00:24:05.598 }' 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:05.598 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:06.168 [2024-10-15 09:23:49.906784] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:06.168 [2024-10-15 09:23:49.907026] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:06.168 [2024-10-15 09:23:49.907183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:06.168 [2024-10-15 09:23:49.907263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:06.168 [2024-10-15 09:23:49.907279] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:06.168 [2024-10-15 09:23:49.970805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:06.168 [2024-10-15 09:23:49.971046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:06.168 [2024-10-15 09:23:49.971092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:06.168 [2024-10-15 09:23:49.971109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:06.168 [2024-10-15 09:23:49.974319] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:06.168 [2024-10-15 09:23:49.974366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:06.168 [2024-10-15 09:23:49.974509] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:06.168 [2024-10-15 09:23:49.974576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:06.168 [2024-10-15 09:23:49.974764] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:06.168 [2024-10-15 09:23:49.974782] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:06.168 [2024-10-15 09:23:49.974808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:24:06.168 [2024-10-15 09:23:49.974887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:06.168 [2024-10-15 09:23:49.975060] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:24:06.168 [2024-10-15 09:23:49.975077] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:06.168 pt1 00:24:06.168 [2024-10-15 09:23:49.975431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:06.168 [2024-10-15 09:23:49.975634] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:24:06.168 [2024-10-15 09:23:49.975655] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:24:06.168 [2024-10-15 09:23:49.975843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.168 09:23:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.168 09:23:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:06.168 "name": "raid_bdev1", 00:24:06.168 "uuid": "1707f5b2-69d1-47df-9ec7-a54483734d7e", 00:24:06.168 "strip_size_kb": 0, 00:24:06.168 "state": "online", 00:24:06.168 "raid_level": "raid1", 
00:24:06.168 "superblock": true, 00:24:06.168 "num_base_bdevs": 2, 00:24:06.168 "num_base_bdevs_discovered": 1, 00:24:06.169 "num_base_bdevs_operational": 1, 00:24:06.169 "base_bdevs_list": [ 00:24:06.169 { 00:24:06.169 "name": null, 00:24:06.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.169 "is_configured": false, 00:24:06.169 "data_offset": 256, 00:24:06.169 "data_size": 7936 00:24:06.169 }, 00:24:06.169 { 00:24:06.169 "name": "pt2", 00:24:06.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:06.169 "is_configured": true, 00:24:06.169 "data_offset": 256, 00:24:06.169 "data_size": 7936 00:24:06.169 } 00:24:06.169 ] 00:24:06.169 }' 00:24:06.169 09:23:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:06.169 09:23:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:06.736 
[2024-10-15 09:23:50.567360] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 1707f5b2-69d1-47df-9ec7-a54483734d7e '!=' 1707f5b2-69d1-47df-9ec7-a54483734d7e ']' 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86907 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 86907 ']' 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 86907 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86907 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86907' 00:24:06.736 killing process with pid 86907 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 86907 00:24:06.736 [2024-10-15 09:23:50.648802] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:06.736 09:23:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 86907 00:24:06.736 [2024-10-15 09:23:50.649094] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:06.736 [2024-10-15 09:23:50.649192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:24:06.736 [2024-10-15 09:23:50.649219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:24:06.996 [2024-10-15 09:23:50.850630] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:08.376 09:23:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:24:08.376 00:24:08.376 real 0m6.981s 00:24:08.376 user 0m10.955s 00:24:08.376 sys 0m1.107s 00:24:08.376 09:23:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:08.376 09:23:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:08.376 ************************************ 00:24:08.376 END TEST raid_superblock_test_4k 00:24:08.376 ************************************ 00:24:08.376 09:23:52 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:24:08.376 09:23:52 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:24:08.376 09:23:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:24:08.376 09:23:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:08.376 09:23:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:08.376 ************************************ 00:24:08.376 START TEST raid_rebuild_test_sb_4k 00:24:08.376 ************************************ 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:08.376 09:23:52 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87241 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87241 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 87241 ']' 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:08.376 09:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:08.376 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:08.376 Zero copy mechanism will not be used. 00:24:08.376 [2024-10-15 09:23:52.159961] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:24:08.376 [2024-10-15 09:23:52.160217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87241 ] 00:24:08.635 [2024-10-15 09:23:52.339953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.635 [2024-10-15 09:23:52.488187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.895 [2024-10-15 09:23:52.734320] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:08.895 [2024-10-15 09:23:52.734378] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:09.464 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:09.464 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:24:09.464 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:09.464 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:24:09.464 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.464 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:09.464 BaseBdev1_malloc 00:24:09.464 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.464 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:09.464 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.464 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:09.464 [2024-10-15 09:23:53.222746] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:09.464 [2024-10-15 09:23:53.222906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:09.464 [2024-10-15 09:23:53.222948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:09.464 [2024-10-15 09:23:53.222969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:09.464 [2024-10-15 09:23:53.226254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:09.464 [2024-10-15 09:23:53.226305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:09.464 BaseBdev1 00:24:09.464 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.464 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:09.464 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:09.465 BaseBdev2_malloc 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:09.465 [2024-10-15 09:23:53.283655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:09.465 [2024-10-15 09:23:53.283767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:24:09.465 [2024-10-15 09:23:53.283798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:09.465 [2024-10-15 09:23:53.283816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:09.465 [2024-10-15 09:23:53.286907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:09.465 [2024-10-15 09:23:53.286957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:09.465 BaseBdev2 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:09.465 spare_malloc 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:09.465 spare_delay 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:09.465 
[2024-10-15 09:23:53.367859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:09.465 [2024-10-15 09:23:53.367948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:09.465 [2024-10-15 09:23:53.367979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:09.465 [2024-10-15 09:23:53.367998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:09.465 [2024-10-15 09:23:53.371099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:09.465 [2024-10-15 09:23:53.371176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:09.465 spare 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:09.465 [2024-10-15 09:23:53.376049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:09.465 [2024-10-15 09:23:53.378855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:09.465 [2024-10-15 09:23:53.379101] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:09.465 [2024-10-15 09:23:53.379153] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:09.465 [2024-10-15 09:23:53.379508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:09.465 [2024-10-15 09:23:53.379754] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:09.465 [2024-10-15 
09:23:53.379774] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:09.465 [2024-10-15 09:23:53.380006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.465 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:09.725 09:23:53 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.725 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:09.725 "name": "raid_bdev1", 00:24:09.725 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:09.725 "strip_size_kb": 0, 00:24:09.725 "state": "online", 00:24:09.725 "raid_level": "raid1", 00:24:09.725 "superblock": true, 00:24:09.725 "num_base_bdevs": 2, 00:24:09.725 "num_base_bdevs_discovered": 2, 00:24:09.725 "num_base_bdevs_operational": 2, 00:24:09.725 "base_bdevs_list": [ 00:24:09.725 { 00:24:09.725 "name": "BaseBdev1", 00:24:09.725 "uuid": "62f1336d-49f4-5562-9e9a-b388fc1f61c0", 00:24:09.725 "is_configured": true, 00:24:09.725 "data_offset": 256, 00:24:09.725 "data_size": 7936 00:24:09.725 }, 00:24:09.725 { 00:24:09.725 "name": "BaseBdev2", 00:24:09.725 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:09.725 "is_configured": true, 00:24:09.725 "data_offset": 256, 00:24:09.725 "data_size": 7936 00:24:09.725 } 00:24:09.725 ] 00:24:09.725 }' 00:24:09.725 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:09.725 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:09.985 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:09.985 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.985 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:09.985 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:09.985 [2024-10-15 09:23:53.896693] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:10.244 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.244 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:24:10.244 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:10.244 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:10.244 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.244 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:10.245 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.245 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:24:10.245 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:24:10.245 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:24:10.245 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:24:10.245 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:24:10.245 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:10.245 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:10.245 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:10.245 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:10.245 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:10.245 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:24:10.245 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:10.245 09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:10.245 
09:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:10.504 [2024-10-15 09:23:54.288502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:10.504 /dev/nbd0 00:24:10.504 09:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:10.504 09:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:10.504 09:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:10.504 09:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:24:10.504 09:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:10.504 09:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:10.504 09:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:10.504 09:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:24:10.504 09:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:10.504 09:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:10.504 09:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:10.504 1+0 records in 00:24:10.504 1+0 records out 00:24:10.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026895 s, 15.2 MB/s 00:24:10.504 09:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:10.504 09:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:24:10.504 09:23:54 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:10.504 09:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:10.504 09:23:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:24:10.504 09:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:10.504 09:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:10.504 09:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:24:10.504 09:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:24:10.504 09:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:24:11.441 7936+0 records in 00:24:11.441 7936+0 records out 00:24:11.441 32505856 bytes (33 MB, 31 MiB) copied, 0.907389 s, 35.8 MB/s 00:24:11.441 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:11.441 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:11.441 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:11.441 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:11.441 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:24:11.441 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:11.441 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:12.009 
[2024-10-15 09:23:55.643251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:12.009 [2024-10-15 09:23:55.659419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:12.009 09:23:55 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.009 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:12.009 "name": "raid_bdev1", 00:24:12.009 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:12.009 "strip_size_kb": 0, 00:24:12.009 "state": "online", 00:24:12.009 "raid_level": "raid1", 00:24:12.009 "superblock": true, 00:24:12.009 "num_base_bdevs": 2, 00:24:12.009 "num_base_bdevs_discovered": 1, 00:24:12.009 "num_base_bdevs_operational": 1, 00:24:12.009 "base_bdevs_list": [ 00:24:12.009 { 00:24:12.009 "name": null, 00:24:12.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.010 "is_configured": false, 00:24:12.010 "data_offset": 0, 00:24:12.010 "data_size": 7936 00:24:12.010 }, 00:24:12.010 { 00:24:12.010 "name": "BaseBdev2", 00:24:12.010 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:12.010 "is_configured": true, 00:24:12.010 "data_offset": 256, 00:24:12.010 
"data_size": 7936 00:24:12.010 } 00:24:12.010 ] 00:24:12.010 }' 00:24:12.010 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:12.010 09:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:12.268 09:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:12.268 09:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.268 09:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:12.268 [2024-10-15 09:23:56.159702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:12.268 [2024-10-15 09:23:56.177975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:24:12.268 09:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.268 09:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:12.268 [2024-10-15 09:23:56.180699] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:13.663 "name": "raid_bdev1", 00:24:13.663 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:13.663 "strip_size_kb": 0, 00:24:13.663 "state": "online", 00:24:13.663 "raid_level": "raid1", 00:24:13.663 "superblock": true, 00:24:13.663 "num_base_bdevs": 2, 00:24:13.663 "num_base_bdevs_discovered": 2, 00:24:13.663 "num_base_bdevs_operational": 2, 00:24:13.663 "process": { 00:24:13.663 "type": "rebuild", 00:24:13.663 "target": "spare", 00:24:13.663 "progress": { 00:24:13.663 "blocks": 2560, 00:24:13.663 "percent": 32 00:24:13.663 } 00:24:13.663 }, 00:24:13.663 "base_bdevs_list": [ 00:24:13.663 { 00:24:13.663 "name": "spare", 00:24:13.663 "uuid": "76226e7f-14d3-52fc-bf81-c1c9936605e8", 00:24:13.663 "is_configured": true, 00:24:13.663 "data_offset": 256, 00:24:13.663 "data_size": 7936 00:24:13.663 }, 00:24:13.663 { 00:24:13.663 "name": "BaseBdev2", 00:24:13.663 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:13.663 "is_configured": true, 00:24:13.663 "data_offset": 256, 00:24:13.663 "data_size": 7936 00:24:13.663 } 00:24:13.663 ] 00:24:13.663 }' 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:13.663 [2024-10-15 09:23:57.346906] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:13.663 [2024-10-15 09:23:57.392565] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:13.663 [2024-10-15 09:23:57.392705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:13.663 [2024-10-15 09:23:57.392732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:13.663 [2024-10-15 09:23:57.392753] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:13.663 "name": "raid_bdev1", 00:24:13.663 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:13.663 "strip_size_kb": 0, 00:24:13.663 "state": "online", 00:24:13.663 "raid_level": "raid1", 00:24:13.663 "superblock": true, 00:24:13.663 "num_base_bdevs": 2, 00:24:13.663 "num_base_bdevs_discovered": 1, 00:24:13.663 "num_base_bdevs_operational": 1, 00:24:13.663 "base_bdevs_list": [ 00:24:13.663 { 00:24:13.663 "name": null, 00:24:13.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.663 "is_configured": false, 00:24:13.663 "data_offset": 0, 00:24:13.663 "data_size": 7936 00:24:13.663 }, 00:24:13.663 { 00:24:13.663 "name": "BaseBdev2", 00:24:13.663 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:13.663 "is_configured": true, 00:24:13.663 "data_offset": 256, 00:24:13.663 "data_size": 7936 00:24:13.663 } 00:24:13.663 ] 00:24:13.663 }' 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:13.663 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:14.230 09:23:57 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:14.230 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:14.230 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:14.230 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:14.230 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:14.230 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.230 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.230 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.230 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:14.230 09:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.230 09:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:14.230 "name": "raid_bdev1", 00:24:14.230 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:14.230 "strip_size_kb": 0, 00:24:14.230 "state": "online", 00:24:14.230 "raid_level": "raid1", 00:24:14.230 "superblock": true, 00:24:14.230 "num_base_bdevs": 2, 00:24:14.230 "num_base_bdevs_discovered": 1, 00:24:14.230 "num_base_bdevs_operational": 1, 00:24:14.230 "base_bdevs_list": [ 00:24:14.230 { 00:24:14.230 "name": null, 00:24:14.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.230 "is_configured": false, 00:24:14.230 "data_offset": 0, 00:24:14.230 "data_size": 7936 00:24:14.230 }, 00:24:14.230 { 00:24:14.230 "name": "BaseBdev2", 00:24:14.230 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:14.230 "is_configured": true, 00:24:14.230 "data_offset": 
256, 00:24:14.230 "data_size": 7936 00:24:14.230 } 00:24:14.230 ] 00:24:14.230 }' 00:24:14.230 09:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:14.230 09:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:14.230 09:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:14.230 09:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:14.230 09:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:14.231 09:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.231 09:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:14.231 [2024-10-15 09:23:58.119194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:14.231 [2024-10-15 09:23:58.136449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:24:14.231 09:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.231 09:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:14.231 [2024-10-15 09:23:58.139315] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:15.607 "name": "raid_bdev1", 00:24:15.607 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:15.607 "strip_size_kb": 0, 00:24:15.607 "state": "online", 00:24:15.607 "raid_level": "raid1", 00:24:15.607 "superblock": true, 00:24:15.607 "num_base_bdevs": 2, 00:24:15.607 "num_base_bdevs_discovered": 2, 00:24:15.607 "num_base_bdevs_operational": 2, 00:24:15.607 "process": { 00:24:15.607 "type": "rebuild", 00:24:15.607 "target": "spare", 00:24:15.607 "progress": { 00:24:15.607 "blocks": 2560, 00:24:15.607 "percent": 32 00:24:15.607 } 00:24:15.607 }, 00:24:15.607 "base_bdevs_list": [ 00:24:15.607 { 00:24:15.607 "name": "spare", 00:24:15.607 "uuid": "76226e7f-14d3-52fc-bf81-c1c9936605e8", 00:24:15.607 "is_configured": true, 00:24:15.607 "data_offset": 256, 00:24:15.607 "data_size": 7936 00:24:15.607 }, 00:24:15.607 { 00:24:15.607 "name": "BaseBdev2", 00:24:15.607 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:15.607 "is_configured": true, 00:24:15.607 "data_offset": 256, 00:24:15.607 "data_size": 7936 00:24:15.607 } 00:24:15.607 ] 00:24:15.607 }' 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:15.607 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=750 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.607 09:23:59 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:15.607 "name": "raid_bdev1", 00:24:15.607 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:15.607 "strip_size_kb": 0, 00:24:15.607 "state": "online", 00:24:15.607 "raid_level": "raid1", 00:24:15.607 "superblock": true, 00:24:15.607 "num_base_bdevs": 2, 00:24:15.607 "num_base_bdevs_discovered": 2, 00:24:15.607 "num_base_bdevs_operational": 2, 00:24:15.607 "process": { 00:24:15.607 "type": "rebuild", 00:24:15.607 "target": "spare", 00:24:15.607 "progress": { 00:24:15.607 "blocks": 2816, 00:24:15.607 "percent": 35 00:24:15.607 } 00:24:15.607 }, 00:24:15.607 "base_bdevs_list": [ 00:24:15.607 { 00:24:15.607 "name": "spare", 00:24:15.607 "uuid": "76226e7f-14d3-52fc-bf81-c1c9936605e8", 00:24:15.607 "is_configured": true, 00:24:15.607 "data_offset": 256, 00:24:15.607 "data_size": 7936 00:24:15.607 }, 00:24:15.607 { 00:24:15.607 "name": "BaseBdev2", 00:24:15.607 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:15.607 "is_configured": true, 00:24:15.607 "data_offset": 256, 00:24:15.607 "data_size": 7936 00:24:15.607 } 00:24:15.607 ] 00:24:15.607 }' 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:15.607 09:23:59 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:24:16.985 09:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:16.985 09:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:16.985 09:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:16.985 09:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:16.985 09:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:16.985 09:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:16.985 09:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.985 09:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.985 09:24:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.985 09:24:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:16.985 09:24:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.985 09:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:16.985 "name": "raid_bdev1", 00:24:16.985 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:16.985 "strip_size_kb": 0, 00:24:16.985 "state": "online", 00:24:16.985 "raid_level": "raid1", 00:24:16.985 "superblock": true, 00:24:16.985 "num_base_bdevs": 2, 00:24:16.985 "num_base_bdevs_discovered": 2, 00:24:16.985 "num_base_bdevs_operational": 2, 00:24:16.985 "process": { 00:24:16.985 "type": "rebuild", 00:24:16.985 "target": "spare", 00:24:16.985 "progress": { 00:24:16.985 "blocks": 5888, 00:24:16.985 "percent": 74 00:24:16.985 } 00:24:16.985 }, 00:24:16.985 "base_bdevs_list": [ 00:24:16.985 { 
00:24:16.985 "name": "spare", 00:24:16.985 "uuid": "76226e7f-14d3-52fc-bf81-c1c9936605e8", 00:24:16.985 "is_configured": true, 00:24:16.985 "data_offset": 256, 00:24:16.985 "data_size": 7936 00:24:16.985 }, 00:24:16.985 { 00:24:16.985 "name": "BaseBdev2", 00:24:16.985 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:16.985 "is_configured": true, 00:24:16.985 "data_offset": 256, 00:24:16.985 "data_size": 7936 00:24:16.985 } 00:24:16.985 ] 00:24:16.985 }' 00:24:16.985 09:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:16.985 09:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:16.985 09:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:16.985 09:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:16.985 09:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:17.613 [2024-10-15 09:24:01.268069] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:17.613 [2024-10-15 09:24:01.268440] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:17.613 [2024-10-15 09:24:01.268623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:17.872 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:17.872 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:17.872 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:17.872 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:17.872 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:24:17.872 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:17.872 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.872 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:17.872 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.872 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:17.872 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.872 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:17.872 "name": "raid_bdev1", 00:24:17.872 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:17.872 "strip_size_kb": 0, 00:24:17.872 "state": "online", 00:24:17.872 "raid_level": "raid1", 00:24:17.872 "superblock": true, 00:24:17.872 "num_base_bdevs": 2, 00:24:17.872 "num_base_bdevs_discovered": 2, 00:24:17.872 "num_base_bdevs_operational": 2, 00:24:17.872 "base_bdevs_list": [ 00:24:17.872 { 00:24:17.872 "name": "spare", 00:24:17.872 "uuid": "76226e7f-14d3-52fc-bf81-c1c9936605e8", 00:24:17.872 "is_configured": true, 00:24:17.872 "data_offset": 256, 00:24:17.872 "data_size": 7936 00:24:17.872 }, 00:24:17.872 { 00:24:17.872 "name": "BaseBdev2", 00:24:17.872 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:17.872 "is_configured": true, 00:24:17.872 "data_offset": 256, 00:24:17.872 "data_size": 7936 00:24:17.872 } 00:24:17.872 ] 00:24:17.872 }' 00:24:17.872 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:17.872 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:17.872 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
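The `bdev_raid.sh: line 666: [: =: unary operator expected` message earlier in this run is a classic single-bracket quoting bug: when a variable expands to nothing inside `[ ... ]`, the test collapses to `[ = false ]` and `test(1)` sees a stray `=` with no left operand. A minimal reproduction with the usual fix (`fast_test` is a hypothetical name; the log does not show which variable was empty):

```shell
# Unquoted, `[ $fast_test = false ]` with an empty variable becomes
# `[ = false ]` -> "unary operator expected". Quoting the expansion
# (or using bash's [[ ... ]]) keeps the empty string as one word.
fast_test=""
if [ "$fast_test" = false ]; then
    echo "skipping long path"
else
    echo "running full path"
fi
```

Note the test script still proceeds after the error because the failed `[` simply returns nonzero; the condition silently falls through rather than aborting the run.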
00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:18.130 "name": "raid_bdev1", 00:24:18.130 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:18.130 "strip_size_kb": 0, 00:24:18.130 "state": "online", 00:24:18.130 "raid_level": "raid1", 00:24:18.130 "superblock": true, 00:24:18.130 "num_base_bdevs": 2, 00:24:18.130 "num_base_bdevs_discovered": 2, 00:24:18.130 "num_base_bdevs_operational": 2, 00:24:18.130 "base_bdevs_list": [ 00:24:18.130 { 00:24:18.130 "name": "spare", 00:24:18.130 "uuid": "76226e7f-14d3-52fc-bf81-c1c9936605e8", 00:24:18.130 "is_configured": true, 00:24:18.130 
"data_offset": 256, 00:24:18.130 "data_size": 7936 00:24:18.130 }, 00:24:18.130 { 00:24:18.130 "name": "BaseBdev2", 00:24:18.130 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:18.130 "is_configured": true, 00:24:18.130 "data_offset": 256, 00:24:18.130 "data_size": 7936 00:24:18.130 } 00:24:18.130 ] 00:24:18.130 }' 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:18.130 09:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.130 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:18.130 "name": "raid_bdev1", 00:24:18.130 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:18.130 "strip_size_kb": 0, 00:24:18.130 "state": "online", 00:24:18.130 "raid_level": "raid1", 00:24:18.130 "superblock": true, 00:24:18.130 "num_base_bdevs": 2, 00:24:18.130 "num_base_bdevs_discovered": 2, 00:24:18.130 "num_base_bdevs_operational": 2, 00:24:18.130 "base_bdevs_list": [ 00:24:18.130 { 00:24:18.130 "name": "spare", 00:24:18.130 "uuid": "76226e7f-14d3-52fc-bf81-c1c9936605e8", 00:24:18.130 "is_configured": true, 00:24:18.130 "data_offset": 256, 00:24:18.130 "data_size": 7936 00:24:18.130 }, 00:24:18.130 { 00:24:18.130 "name": "BaseBdev2", 00:24:18.130 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:18.130 "is_configured": true, 00:24:18.130 "data_offset": 256, 00:24:18.130 "data_size": 7936 00:24:18.130 } 00:24:18.130 ] 00:24:18.130 }' 00:24:18.130 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:18.130 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:18.698 
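The rebuild-progress loop above (`local timeout=750`, `(( SECONDS < timeout ))`, `sleep 1`, `break` once `process` reports `none`) uses bash's built-in `SECONDS` counter, which auto-increments once per second of elapsed time. A minimal standalone version of the same polling idiom (`wait_for` is a hypothetical helper, not part of the SPDK scripts):

```shell
# Poll a predicate command until it succeeds or the deadline passes.
# SECONDS is maintained by bash itself; assigning 0 restarts the clock,
# which is what makes `(( SECONDS < timeout ))` a cheap deadline check.
wait_for() {    # wait_for <predicate-cmd> <timeout-seconds>
    local timeout=$2
    SECONDS=0
    while (( SECONDS < timeout )); do
        "$1" && return 0
        sleep 1
    done
    return 1
}
wait_for true 5 && echo "condition met"
```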
[2024-10-15 09:24:02.503465] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:18.698 [2024-10-15 09:24:02.503701] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:18.698 [2024-10-15 09:24:02.503841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:18.698 [2024-10-15 09:24:02.503946] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:18.698 [2024-10-15 09:24:02.503963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:18.698 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:19.266 /dev/nbd0 00:24:19.266 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:19.266 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:19.266 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:19.266 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:24:19.266 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:19.266 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:19.266 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:19.266 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:24:19.266 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:19.266 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:19.266 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:19.266 1+0 records in 00:24:19.266 1+0 records out 00:24:19.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638608 s, 6.4 MB/s 00:24:19.266 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:19.266 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:24:19.266 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:19.266 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:19.266 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:24:19.266 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:19.266 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:19.266 09:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:19.525 /dev/nbd1 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:19.525 1+0 records in 00:24:19.525 1+0 records out 00:24:19.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427394 s, 9.6 MB/s 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:19.525 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:20.093 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:20.093 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:20.093 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:20.093 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:20.093 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:20.093 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:20.093 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:24:20.093 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:24:20.093 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:20.093 09:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:20.352 09:24:04 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:20.352 [2024-10-15 09:24:04.042638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:20.352 [2024-10-15 09:24:04.042738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:20.352 [2024-10-15 09:24:04.042773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:20.352 [2024-10-15 09:24:04.042789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:20.352 [2024-10-15 09:24:04.046054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:20.352 
[2024-10-15 09:24:04.046278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:20.352 [2024-10-15 09:24:04.046439] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:20.352 [2024-10-15 09:24:04.046520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:20.352 [2024-10-15 09:24:04.046729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:20.352 spare 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:20.352 [2024-10-15 09:24:04.146974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:20.352 [2024-10-15 09:24:04.147052] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:20.352 [2024-10-15 09:24:04.147615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:24:20.352 [2024-10-15 09:24:04.147926] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:20.352 [2024-10-15 09:24:04.147950] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:20.352 [2024-10-15 09:24:04.148243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:20.352 09:24:04 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:20.352 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:20.353 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:20.353 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.353 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.353 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.353 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:20.353 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.353 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:20.353 "name": "raid_bdev1", 00:24:20.353 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:20.353 "strip_size_kb": 0, 00:24:20.353 "state": "online", 00:24:20.353 "raid_level": "raid1", 00:24:20.353 "superblock": true, 00:24:20.353 "num_base_bdevs": 2, 00:24:20.353 "num_base_bdevs_discovered": 2, 00:24:20.353 "num_base_bdevs_operational": 2, 
00:24:20.353 "base_bdevs_list": [ 00:24:20.353 { 00:24:20.353 "name": "spare", 00:24:20.353 "uuid": "76226e7f-14d3-52fc-bf81-c1c9936605e8", 00:24:20.353 "is_configured": true, 00:24:20.353 "data_offset": 256, 00:24:20.353 "data_size": 7936 00:24:20.353 }, 00:24:20.353 { 00:24:20.353 "name": "BaseBdev2", 00:24:20.353 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:20.353 "is_configured": true, 00:24:20.353 "data_offset": 256, 00:24:20.353 "data_size": 7936 00:24:20.353 } 00:24:20.353 ] 00:24:20.353 }' 00:24:20.353 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:20.353 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:20.920 "name": "raid_bdev1", 00:24:20.920 
"uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:20.920 "strip_size_kb": 0, 00:24:20.920 "state": "online", 00:24:20.920 "raid_level": "raid1", 00:24:20.920 "superblock": true, 00:24:20.920 "num_base_bdevs": 2, 00:24:20.920 "num_base_bdevs_discovered": 2, 00:24:20.920 "num_base_bdevs_operational": 2, 00:24:20.920 "base_bdevs_list": [ 00:24:20.920 { 00:24:20.920 "name": "spare", 00:24:20.920 "uuid": "76226e7f-14d3-52fc-bf81-c1c9936605e8", 00:24:20.920 "is_configured": true, 00:24:20.920 "data_offset": 256, 00:24:20.920 "data_size": 7936 00:24:20.920 }, 00:24:20.920 { 00:24:20.920 "name": "BaseBdev2", 00:24:20.920 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:20.920 "is_configured": true, 00:24:20.920 "data_offset": 256, 00:24:20.920 "data_size": 7936 00:24:20.920 } 00:24:20.920 ] 00:24:20.920 }' 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:20.920 [2024-10-15 09:24:04.839226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:20.920 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:21.179 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.179 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.179 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.179 
09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:21.179 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.179 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:21.179 "name": "raid_bdev1", 00:24:21.179 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:21.179 "strip_size_kb": 0, 00:24:21.179 "state": "online", 00:24:21.179 "raid_level": "raid1", 00:24:21.179 "superblock": true, 00:24:21.179 "num_base_bdevs": 2, 00:24:21.179 "num_base_bdevs_discovered": 1, 00:24:21.179 "num_base_bdevs_operational": 1, 00:24:21.179 "base_bdevs_list": [ 00:24:21.179 { 00:24:21.179 "name": null, 00:24:21.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.179 "is_configured": false, 00:24:21.179 "data_offset": 0, 00:24:21.179 "data_size": 7936 00:24:21.179 }, 00:24:21.179 { 00:24:21.179 "name": "BaseBdev2", 00:24:21.179 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:21.179 "is_configured": true, 00:24:21.179 "data_offset": 256, 00:24:21.179 "data_size": 7936 00:24:21.179 } 00:24:21.179 ] 00:24:21.179 }' 00:24:21.179 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:21.179 09:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:21.437 09:24:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:21.437 09:24:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.437 09:24:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:21.437 [2024-10-15 09:24:05.343431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:21.437 [2024-10-15 09:24:05.343748] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:24:21.437 [2024-10-15 09:24:05.343779] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:21.437 [2024-10-15 09:24:05.343843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:21.437 [2024-10-15 09:24:05.361072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:24:21.437 09:24:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.437 09:24:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:21.437 [2024-10-15 09:24:05.363965] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:22.814 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:22.814 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:22.814 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:22.814 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:22.814 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:22.814 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:22.814 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.814 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.814 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:22.814 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.814 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:22.814 
"name": "raid_bdev1", 00:24:22.814 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:22.814 "strip_size_kb": 0, 00:24:22.814 "state": "online", 00:24:22.814 "raid_level": "raid1", 00:24:22.814 "superblock": true, 00:24:22.814 "num_base_bdevs": 2, 00:24:22.814 "num_base_bdevs_discovered": 2, 00:24:22.814 "num_base_bdevs_operational": 2, 00:24:22.814 "process": { 00:24:22.814 "type": "rebuild", 00:24:22.814 "target": "spare", 00:24:22.814 "progress": { 00:24:22.814 "blocks": 2560, 00:24:22.814 "percent": 32 00:24:22.814 } 00:24:22.814 }, 00:24:22.814 "base_bdevs_list": [ 00:24:22.814 { 00:24:22.814 "name": "spare", 00:24:22.814 "uuid": "76226e7f-14d3-52fc-bf81-c1c9936605e8", 00:24:22.814 "is_configured": true, 00:24:22.814 "data_offset": 256, 00:24:22.814 "data_size": 7936 00:24:22.814 }, 00:24:22.814 { 00:24:22.814 "name": "BaseBdev2", 00:24:22.814 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:22.814 "is_configured": true, 00:24:22.814 "data_offset": 256, 00:24:22.814 "data_size": 7936 00:24:22.814 } 00:24:22.814 ] 00:24:22.814 }' 00:24:22.814 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:22.814 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:22.814 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:22.814 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:22.814 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:22.814 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.814 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:22.814 [2024-10-15 09:24:06.529424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:22.814 [2024-10-15 
09:24:06.574908] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:22.814 [2024-10-15 09:24:06.574995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:22.814 [2024-10-15 09:24:06.575020] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:22.815 [2024-10-15 09:24:06.575036] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:22.815 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.815 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:22.815 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:22.815 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:22.815 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:22.815 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:22.815 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:22.815 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:22.815 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:22.815 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:22.815 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:22.815 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:22.815 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.815 09:24:06 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:22.815 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.815 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.815 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:22.815 "name": "raid_bdev1", 00:24:22.815 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:22.815 "strip_size_kb": 0, 00:24:22.815 "state": "online", 00:24:22.815 "raid_level": "raid1", 00:24:22.815 "superblock": true, 00:24:22.815 "num_base_bdevs": 2, 00:24:22.815 "num_base_bdevs_discovered": 1, 00:24:22.815 "num_base_bdevs_operational": 1, 00:24:22.815 "base_bdevs_list": [ 00:24:22.815 { 00:24:22.815 "name": null, 00:24:22.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.815 "is_configured": false, 00:24:22.815 "data_offset": 0, 00:24:22.815 "data_size": 7936 00:24:22.815 }, 00:24:22.815 { 00:24:22.815 "name": "BaseBdev2", 00:24:22.815 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:22.815 "is_configured": true, 00:24:22.815 "data_offset": 256, 00:24:22.815 "data_size": 7936 00:24:22.815 } 00:24:22.815 ] 00:24:22.815 }' 00:24:22.815 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:22.815 09:24:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:23.382 09:24:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:23.382 09:24:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.382 09:24:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:23.382 [2024-10-15 09:24:07.136584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:23.382 [2024-10-15 09:24:07.136675] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:23.382 [2024-10-15 09:24:07.136711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:23.382 [2024-10-15 09:24:07.136730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:23.382 [2024-10-15 09:24:07.137451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:23.382 [2024-10-15 09:24:07.137490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:23.382 [2024-10-15 09:24:07.137621] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:23.382 [2024-10-15 09:24:07.137654] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:23.382 [2024-10-15 09:24:07.137670] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:23.382 [2024-10-15 09:24:07.137711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:23.382 [2024-10-15 09:24:07.154145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:24:23.382 spare 00:24:23.382 09:24:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.382 09:24:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:24:23.382 [2024-10-15 09:24:07.156988] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:24.318 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:24.318 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:24.318 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:24.318 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:24.318 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:24.318 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.318 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.318 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:24.318 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.318 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.318 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:24.318 "name": "raid_bdev1", 00:24:24.318 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:24.318 "strip_size_kb": 0, 00:24:24.318 
"state": "online", 00:24:24.318 "raid_level": "raid1", 00:24:24.318 "superblock": true, 00:24:24.318 "num_base_bdevs": 2, 00:24:24.318 "num_base_bdevs_discovered": 2, 00:24:24.318 "num_base_bdevs_operational": 2, 00:24:24.318 "process": { 00:24:24.318 "type": "rebuild", 00:24:24.318 "target": "spare", 00:24:24.318 "progress": { 00:24:24.318 "blocks": 2560, 00:24:24.318 "percent": 32 00:24:24.318 } 00:24:24.318 }, 00:24:24.318 "base_bdevs_list": [ 00:24:24.318 { 00:24:24.318 "name": "spare", 00:24:24.318 "uuid": "76226e7f-14d3-52fc-bf81-c1c9936605e8", 00:24:24.318 "is_configured": true, 00:24:24.318 "data_offset": 256, 00:24:24.318 "data_size": 7936 00:24:24.318 }, 00:24:24.318 { 00:24:24.318 "name": "BaseBdev2", 00:24:24.318 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:24.318 "is_configured": true, 00:24:24.318 "data_offset": 256, 00:24:24.318 "data_size": 7936 00:24:24.318 } 00:24:24.318 ] 00:24:24.318 }' 00:24:24.318 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:24.577 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:24.577 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:24.577 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:24.577 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:24.577 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.578 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:24.578 [2024-10-15 09:24:08.322597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:24.578 [2024-10-15 09:24:08.368188] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:24:24.578 [2024-10-15 09:24:08.368285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:24.578 [2024-10-15 09:24:08.368314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:24.578 [2024-10-15 09:24:08.368327] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:24.578 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.578 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:24.578 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:24.578 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:24.578 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:24.578 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:24.578 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:24.578 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:24.578 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:24.578 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:24.578 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:24.578 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.578 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.578 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.578 09:24:08 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:24.578 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.578 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:24.578 "name": "raid_bdev1", 00:24:24.578 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:24.578 "strip_size_kb": 0, 00:24:24.578 "state": "online", 00:24:24.578 "raid_level": "raid1", 00:24:24.578 "superblock": true, 00:24:24.578 "num_base_bdevs": 2, 00:24:24.578 "num_base_bdevs_discovered": 1, 00:24:24.578 "num_base_bdevs_operational": 1, 00:24:24.578 "base_bdevs_list": [ 00:24:24.578 { 00:24:24.578 "name": null, 00:24:24.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.578 "is_configured": false, 00:24:24.578 "data_offset": 0, 00:24:24.578 "data_size": 7936 00:24:24.578 }, 00:24:24.578 { 00:24:24.578 "name": "BaseBdev2", 00:24:24.578 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:24.578 "is_configured": true, 00:24:24.578 "data_offset": 256, 00:24:24.578 "data_size": 7936 00:24:24.578 } 00:24:24.578 ] 00:24:24.578 }' 00:24:24.578 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:24.578 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.146 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:25.146 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:25.146 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:25.146 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:25.146 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:25.146 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.146 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.146 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.146 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.146 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.146 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:25.146 "name": "raid_bdev1", 00:24:25.146 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:25.146 "strip_size_kb": 0, 00:24:25.146 "state": "online", 00:24:25.146 "raid_level": "raid1", 00:24:25.146 "superblock": true, 00:24:25.146 "num_base_bdevs": 2, 00:24:25.146 "num_base_bdevs_discovered": 1, 00:24:25.146 "num_base_bdevs_operational": 1, 00:24:25.146 "base_bdevs_list": [ 00:24:25.146 { 00:24:25.146 "name": null, 00:24:25.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.146 "is_configured": false, 00:24:25.146 "data_offset": 0, 00:24:25.146 "data_size": 7936 00:24:25.146 }, 00:24:25.146 { 00:24:25.146 "name": "BaseBdev2", 00:24:25.146 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:25.146 "is_configured": true, 00:24:25.146 "data_offset": 256, 00:24:25.146 "data_size": 7936 00:24:25.146 } 00:24:25.146 ] 00:24:25.146 }' 00:24:25.146 09:24:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:25.146 09:24:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:25.146 09:24:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:25.147 09:24:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:25.147 09:24:09 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:24:25.147 09:24:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.147 09:24:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.147 09:24:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.147 09:24:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:25.147 09:24:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.147 09:24:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.147 [2024-10-15 09:24:09.064138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:25.147 [2024-10-15 09:24:09.064361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.147 [2024-10-15 09:24:09.064411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:25.147 [2024-10-15 09:24:09.064441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.147 [2024-10-15 09:24:09.065092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.147 [2024-10-15 09:24:09.065136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:25.147 [2024-10-15 09:24:09.065263] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:25.147 [2024-10-15 09:24:09.065286] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:25.147 [2024-10-15 09:24:09.065301] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:25.147 [2024-10-15 09:24:09.065315] 
bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:24:25.147 BaseBdev1 00:24:25.147 09:24:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.147 09:24:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:26.553 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:26.553 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:26.553 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:26.553 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:26.553 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:26.553 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:26.553 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:26.553 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:26.553 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:26.553 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:26.553 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.553 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.553 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.553 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.553 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.553 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:26.554 "name": "raid_bdev1", 00:24:26.554 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:26.554 "strip_size_kb": 0, 00:24:26.554 "state": "online", 00:24:26.554 "raid_level": "raid1", 00:24:26.554 "superblock": true, 00:24:26.554 "num_base_bdevs": 2, 00:24:26.554 "num_base_bdevs_discovered": 1, 00:24:26.554 "num_base_bdevs_operational": 1, 00:24:26.554 "base_bdevs_list": [ 00:24:26.554 { 00:24:26.554 "name": null, 00:24:26.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.554 "is_configured": false, 00:24:26.554 "data_offset": 0, 00:24:26.554 "data_size": 7936 00:24:26.554 }, 00:24:26.554 { 00:24:26.554 "name": "BaseBdev2", 00:24:26.554 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:26.554 "is_configured": true, 00:24:26.554 "data_offset": 256, 00:24:26.554 "data_size": 7936 00:24:26.554 } 00:24:26.554 ] 00:24:26.554 }' 00:24:26.554 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:26.554 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.813 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:26.814 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:26.814 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:26.814 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:26.814 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:26.814 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.814 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:26.814 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.814 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.814 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.814 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:26.814 "name": "raid_bdev1", 00:24:26.814 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:26.814 "strip_size_kb": 0, 00:24:26.814 "state": "online", 00:24:26.814 "raid_level": "raid1", 00:24:26.814 "superblock": true, 00:24:26.814 "num_base_bdevs": 2, 00:24:26.814 "num_base_bdevs_discovered": 1, 00:24:26.814 "num_base_bdevs_operational": 1, 00:24:26.814 "base_bdevs_list": [ 00:24:26.814 { 00:24:26.814 "name": null, 00:24:26.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.814 "is_configured": false, 00:24:26.814 "data_offset": 0, 00:24:26.814 "data_size": 7936 00:24:26.814 }, 00:24:26.814 { 00:24:26.814 "name": "BaseBdev2", 00:24:26.814 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:26.814 "is_configured": true, 00:24:26.814 "data_offset": 256, 00:24:26.814 "data_size": 7936 00:24:26.814 } 00:24:26.814 ] 00:24:26.814 }' 00:24:26.814 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:26.814 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:26.814 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:27.073 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:27.073 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:27.073 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:24:27.073 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:27.073 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:27.073 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:27.073 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:27.073 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:27.073 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:27.073 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.073 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:27.073 [2024-10-15 09:24:10.760790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:27.073 [2024-10-15 09:24:10.761054] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:27.073 [2024-10-15 09:24:10.761077] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:27.073 request: 00:24:27.073 { 00:24:27.073 "base_bdev": "BaseBdev1", 00:24:27.073 "raid_bdev": "raid_bdev1", 00:24:27.073 "method": "bdev_raid_add_base_bdev", 00:24:27.073 "req_id": 1 00:24:27.073 } 00:24:27.073 Got JSON-RPC error response 00:24:27.073 response: 00:24:27.073 { 00:24:27.073 "code": -22, 00:24:27.073 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:27.073 } 00:24:27.073 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:24:27.073 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:24:27.073 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:27.073 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:27.073 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:27.073 09:24:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:24:28.011 09:24:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:28.011 09:24:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:28.011 09:24:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:28.011 09:24:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:28.011 09:24:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:28.011 09:24:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:28.011 09:24:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:28.011 09:24:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:28.011 09:24:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:28.011 09:24:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:28.011 09:24:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:28.011 09:24:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:28.011 09:24:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:28.011 09:24:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:28.011 09:24:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.011 09:24:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:28.011 "name": "raid_bdev1", 00:24:28.011 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:28.011 "strip_size_kb": 0, 00:24:28.011 "state": "online", 00:24:28.011 "raid_level": "raid1", 00:24:28.011 "superblock": true, 00:24:28.011 "num_base_bdevs": 2, 00:24:28.011 "num_base_bdevs_discovered": 1, 00:24:28.011 "num_base_bdevs_operational": 1, 00:24:28.011 "base_bdevs_list": [ 00:24:28.011 { 00:24:28.011 "name": null, 00:24:28.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.011 "is_configured": false, 00:24:28.011 "data_offset": 0, 00:24:28.011 "data_size": 7936 00:24:28.011 }, 00:24:28.011 { 00:24:28.011 "name": "BaseBdev2", 00:24:28.011 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:28.011 "is_configured": true, 00:24:28.011 "data_offset": 256, 00:24:28.011 "data_size": 7936 00:24:28.011 } 00:24:28.011 ] 00:24:28.011 }' 00:24:28.011 09:24:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:28.011 09:24:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:28.578 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:28.578 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:28.578 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:28.578 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:28.578 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:28.579 09:24:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:28.579 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:28.579 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.579 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:28.579 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.579 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:28.579 "name": "raid_bdev1", 00:24:28.579 "uuid": "19c9e731-d06c-4d8d-8760-88c58fe9190b", 00:24:28.579 "strip_size_kb": 0, 00:24:28.579 "state": "online", 00:24:28.579 "raid_level": "raid1", 00:24:28.579 "superblock": true, 00:24:28.579 "num_base_bdevs": 2, 00:24:28.579 "num_base_bdevs_discovered": 1, 00:24:28.579 "num_base_bdevs_operational": 1, 00:24:28.579 "base_bdevs_list": [ 00:24:28.579 { 00:24:28.579 "name": null, 00:24:28.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.579 "is_configured": false, 00:24:28.579 "data_offset": 0, 00:24:28.579 "data_size": 7936 00:24:28.579 }, 00:24:28.579 { 00:24:28.579 "name": "BaseBdev2", 00:24:28.579 "uuid": "eff24db8-99c8-5714-9a7e-06249776963b", 00:24:28.579 "is_configured": true, 00:24:28.579 "data_offset": 256, 00:24:28.579 "data_size": 7936 00:24:28.579 } 00:24:28.579 ] 00:24:28.579 }' 00:24:28.579 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:28.579 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:28.579 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:28.579 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:28.579 09:24:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87241 00:24:28.579 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 87241 ']' 00:24:28.579 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 87241 00:24:28.579 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:24:28.838 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.838 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87241 00:24:28.838 killing process with pid 87241 00:24:28.838 Received shutdown signal, test time was about 60.000000 seconds 00:24:28.838 00:24:28.838 Latency(us) 00:24:28.838 [2024-10-15T09:24:12.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.838 [2024-10-15T09:24:12.766Z] =================================================================================================================== 00:24:28.838 [2024-10-15T09:24:12.766Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:28.838 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:28.838 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:28.838 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87241' 00:24:28.838 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 87241 00:24:28.838 [2024-10-15 09:24:12.535676] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:28.838 09:24:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 87241 00:24:28.838 [2024-10-15 09:24:12.535908] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:28.838 [2024-10-15 
09:24:12.536000] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:28.838 [2024-10-15 09:24:12.536022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:29.097 [2024-10-15 09:24:12.825137] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:30.085 09:24:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:24:30.085 00:24:30.085 real 0m21.905s 00:24:30.085 user 0m29.581s 00:24:30.085 sys 0m2.735s 00:24:30.085 09:24:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:30.085 09:24:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.085 ************************************ 00:24:30.085 END TEST raid_rebuild_test_sb_4k 00:24:30.085 ************************************ 00:24:30.085 09:24:13 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:24:30.085 09:24:13 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:24:30.085 09:24:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:30.085 09:24:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:30.085 09:24:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:30.085 ************************************ 00:24:30.085 START TEST raid_state_function_test_sb_md_separate 00:24:30.085 ************************************ 00:24:30.085 09:24:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:24:30.085 
09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:30.085 09:24:14 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87943 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:30.085 Process raid pid: 87943 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87943' 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87943 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87943 ']' 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:30.085 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:30.344 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:30.344 09:24:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:30.344 [2024-10-15 09:24:14.126372] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:24:30.344 [2024-10-15 09:24:14.126607] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.602 [2024-10-15 09:24:14.327967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.602 [2024-10-15 09:24:14.502278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.861 [2024-10-15 09:24:14.746511] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:30.861 [2024-10-15 09:24:14.746578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:31.427 [2024-10-15 09:24:15.176085] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:31.427 [2024-10-15 09:24:15.176326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:24:31.427 [2024-10-15 09:24:15.176357] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:31.427 [2024-10-15 09:24:15.176377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.427 09:24:15 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:31.427 "name": "Existed_Raid", 00:24:31.427 "uuid": "dffb92e4-8c59-4388-807a-0bf71ff81340", 00:24:31.427 "strip_size_kb": 0, 00:24:31.427 "state": "configuring", 00:24:31.427 "raid_level": "raid1", 00:24:31.427 "superblock": true, 00:24:31.427 "num_base_bdevs": 2, 00:24:31.427 "num_base_bdevs_discovered": 0, 00:24:31.427 "num_base_bdevs_operational": 2, 00:24:31.427 "base_bdevs_list": [ 00:24:31.427 { 00:24:31.427 "name": "BaseBdev1", 00:24:31.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.427 "is_configured": false, 00:24:31.427 "data_offset": 0, 00:24:31.427 "data_size": 0 00:24:31.427 }, 00:24:31.427 { 00:24:31.427 "name": "BaseBdev2", 00:24:31.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.427 "is_configured": false, 00:24:31.427 "data_offset": 0, 00:24:31.427 "data_size": 0 00:24:31.427 } 00:24:31.427 ] 00:24:31.427 }' 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:31.427 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:31.993 [2024-10-15 
09:24:15.732170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:31.993 [2024-10-15 09:24:15.732218] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:31.993 [2024-10-15 09:24:15.740175] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:31.993 [2024-10-15 09:24:15.740229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:31.993 [2024-10-15 09:24:15.740245] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:31.993 [2024-10-15 09:24:15.740265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:31.993 [2024-10-15 09:24:15.791372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:31.993 BaseBdev1 
00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.993 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:31.993 [ 00:24:31.993 { 00:24:31.993 "name": "BaseBdev1", 00:24:31.993 "aliases": [ 00:24:31.993 "09e9b71d-aa5f-4e2b-9cb3-c0648e40a34e" 00:24:31.993 ], 00:24:31.993 "product_name": "Malloc disk", 00:24:31.993 
"block_size": 4096, 00:24:31.993 "num_blocks": 8192, 00:24:31.993 "uuid": "09e9b71d-aa5f-4e2b-9cb3-c0648e40a34e", 00:24:31.993 "md_size": 32, 00:24:31.993 "md_interleave": false, 00:24:31.993 "dif_type": 0, 00:24:31.993 "assigned_rate_limits": { 00:24:31.993 "rw_ios_per_sec": 0, 00:24:31.993 "rw_mbytes_per_sec": 0, 00:24:31.993 "r_mbytes_per_sec": 0, 00:24:31.993 "w_mbytes_per_sec": 0 00:24:31.993 }, 00:24:31.993 "claimed": true, 00:24:31.994 "claim_type": "exclusive_write", 00:24:31.994 "zoned": false, 00:24:31.994 "supported_io_types": { 00:24:31.994 "read": true, 00:24:31.994 "write": true, 00:24:31.994 "unmap": true, 00:24:31.994 "flush": true, 00:24:31.994 "reset": true, 00:24:31.994 "nvme_admin": false, 00:24:31.994 "nvme_io": false, 00:24:31.994 "nvme_io_md": false, 00:24:31.994 "write_zeroes": true, 00:24:31.994 "zcopy": true, 00:24:31.994 "get_zone_info": false, 00:24:31.994 "zone_management": false, 00:24:31.994 "zone_append": false, 00:24:31.994 "compare": false, 00:24:31.994 "compare_and_write": false, 00:24:31.994 "abort": true, 00:24:31.994 "seek_hole": false, 00:24:31.994 "seek_data": false, 00:24:31.994 "copy": true, 00:24:31.994 "nvme_iov_md": false 00:24:31.994 }, 00:24:31.994 "memory_domains": [ 00:24:31.994 { 00:24:31.994 "dma_device_id": "system", 00:24:31.994 "dma_device_type": 1 00:24:31.994 }, 00:24:31.994 { 00:24:31.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.994 "dma_device_type": 2 00:24:31.994 } 00:24:31.994 ], 00:24:31.994 "driver_specific": {} 00:24:31.994 } 00:24:31.994 ] 00:24:31.994 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.994 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:24:31.994 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:31.994 09:24:15 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:31.994 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:31.994 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:31.994 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:31.994 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:31.994 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:31.994 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:31.994 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:31.994 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:31.994 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:31.994 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.994 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:31.994 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:31.994 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.994 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:31.994 "name": "Existed_Raid", 00:24:31.994 "uuid": "abf51db9-9ae6-4242-92a1-da9d48df17dd", 
00:24:31.994 "strip_size_kb": 0, 00:24:31.994 "state": "configuring", 00:24:31.994 "raid_level": "raid1", 00:24:31.994 "superblock": true, 00:24:31.994 "num_base_bdevs": 2, 00:24:31.994 "num_base_bdevs_discovered": 1, 00:24:31.994 "num_base_bdevs_operational": 2, 00:24:31.994 "base_bdevs_list": [ 00:24:31.994 { 00:24:31.994 "name": "BaseBdev1", 00:24:31.994 "uuid": "09e9b71d-aa5f-4e2b-9cb3-c0648e40a34e", 00:24:31.994 "is_configured": true, 00:24:31.994 "data_offset": 256, 00:24:31.994 "data_size": 7936 00:24:31.994 }, 00:24:31.994 { 00:24:31.994 "name": "BaseBdev2", 00:24:31.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.994 "is_configured": false, 00:24:31.994 "data_offset": 0, 00:24:31.994 "data_size": 0 00:24:31.994 } 00:24:31.994 ] 00:24:31.994 }' 00:24:31.994 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:31.994 09:24:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:32.560 [2024-10-15 09:24:16.399654] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:32.560 [2024-10-15 09:24:16.399873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:32.560 09:24:16 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:32.560 [2024-10-15 09:24:16.407699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:32.560 [2024-10-15 09:24:16.410475] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:32.560 [2024-10-15 09:24:16.410530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:32.560 "name": "Existed_Raid", 00:24:32.560 "uuid": "47db3cb9-d76a-4589-8f6d-3f3b81f6bf2f", 00:24:32.560 "strip_size_kb": 0, 00:24:32.560 "state": "configuring", 00:24:32.560 "raid_level": "raid1", 00:24:32.560 "superblock": true, 00:24:32.560 "num_base_bdevs": 2, 00:24:32.560 "num_base_bdevs_discovered": 1, 00:24:32.560 "num_base_bdevs_operational": 2, 00:24:32.560 "base_bdevs_list": [ 00:24:32.560 { 00:24:32.560 "name": "BaseBdev1", 00:24:32.560 "uuid": "09e9b71d-aa5f-4e2b-9cb3-c0648e40a34e", 00:24:32.560 "is_configured": true, 00:24:32.560 "data_offset": 256, 00:24:32.560 "data_size": 7936 00:24:32.560 }, 00:24:32.560 { 00:24:32.560 "name": "BaseBdev2", 00:24:32.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.560 "is_configured": false, 00:24:32.560 "data_offset": 0, 00:24:32.560 "data_size": 0 00:24:32.560 } 00:24:32.560 ] 00:24:32.560 }' 00:24:32.560 09:24:16 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:32.560 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.126 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:24:33.126 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.126 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.126 [2024-10-15 09:24:16.983803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:33.126 [2024-10-15 09:24:16.984407] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:33.126 [2024-10-15 09:24:16.984435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:33.126 [2024-10-15 09:24:16.984543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:33.126 [2024-10-15 09:24:16.984705] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:33.126 [2024-10-15 09:24:16.984729] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:24:33.126 BaseBdev2 00:24:33.126 [2024-10-15 09:24:16.984847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:33.126 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.126 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:33.126 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:24:33.126 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:33.126 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:24:33.126 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:33.126 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:33.126 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:24:33.126 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.126 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.126 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.126 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:33.126 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.126 09:24:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.126 [ 00:24:33.126 { 00:24:33.126 "name": "BaseBdev2", 00:24:33.126 "aliases": [ 00:24:33.126 "8236f2ba-812a-4093-bf36-5737d376847a" 00:24:33.126 ], 00:24:33.126 "product_name": "Malloc disk", 00:24:33.126 "block_size": 4096, 00:24:33.126 "num_blocks": 8192, 00:24:33.126 "uuid": "8236f2ba-812a-4093-bf36-5737d376847a", 00:24:33.126 "md_size": 32, 00:24:33.126 "md_interleave": false, 00:24:33.126 "dif_type": 0, 00:24:33.126 "assigned_rate_limits": { 00:24:33.126 "rw_ios_per_sec": 0, 00:24:33.126 "rw_mbytes_per_sec": 0, 00:24:33.126 "r_mbytes_per_sec": 0, 00:24:33.126 "w_mbytes_per_sec": 0 00:24:33.126 }, 00:24:33.126 "claimed": true, 00:24:33.126 "claim_type": 
"exclusive_write", 00:24:33.126 "zoned": false, 00:24:33.126 "supported_io_types": { 00:24:33.126 "read": true, 00:24:33.126 "write": true, 00:24:33.126 "unmap": true, 00:24:33.126 "flush": true, 00:24:33.126 "reset": true, 00:24:33.126 "nvme_admin": false, 00:24:33.126 "nvme_io": false, 00:24:33.126 "nvme_io_md": false, 00:24:33.126 "write_zeroes": true, 00:24:33.126 "zcopy": true, 00:24:33.126 "get_zone_info": false, 00:24:33.126 "zone_management": false, 00:24:33.126 "zone_append": false, 00:24:33.126 "compare": false, 00:24:33.126 "compare_and_write": false, 00:24:33.126 "abort": true, 00:24:33.126 "seek_hole": false, 00:24:33.126 "seek_data": false, 00:24:33.126 "copy": true, 00:24:33.126 "nvme_iov_md": false 00:24:33.126 }, 00:24:33.126 "memory_domains": [ 00:24:33.126 { 00:24:33.126 "dma_device_id": "system", 00:24:33.126 "dma_device_type": 1 00:24:33.126 }, 00:24:33.126 { 00:24:33.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.126 "dma_device_type": 2 00:24:33.126 } 00:24:33.126 ], 00:24:33.126 "driver_specific": {} 00:24:33.126 } 00:24:33.126 ] 00:24:33.126 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.126 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:24:33.126 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:33.126 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:33.126 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:24:33.126 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:33.126 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:33.126 
09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:33.126 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:33.126 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:33.126 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:33.126 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:33.126 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:33.126 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:33.126 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:33.126 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:33.126 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.126 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.126 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.388 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:33.388 "name": "Existed_Raid", 00:24:33.388 "uuid": "47db3cb9-d76a-4589-8f6d-3f3b81f6bf2f", 00:24:33.388 "strip_size_kb": 0, 00:24:33.388 "state": "online", 00:24:33.388 "raid_level": "raid1", 00:24:33.388 "superblock": true, 00:24:33.388 "num_base_bdevs": 2, 00:24:33.388 "num_base_bdevs_discovered": 2, 00:24:33.388 "num_base_bdevs_operational": 2, 00:24:33.388 
"base_bdevs_list": [ 00:24:33.388 { 00:24:33.388 "name": "BaseBdev1", 00:24:33.388 "uuid": "09e9b71d-aa5f-4e2b-9cb3-c0648e40a34e", 00:24:33.388 "is_configured": true, 00:24:33.388 "data_offset": 256, 00:24:33.388 "data_size": 7936 00:24:33.388 }, 00:24:33.388 { 00:24:33.388 "name": "BaseBdev2", 00:24:33.388 "uuid": "8236f2ba-812a-4093-bf36-5737d376847a", 00:24:33.388 "is_configured": true, 00:24:33.388 "data_offset": 256, 00:24:33.388 "data_size": 7936 00:24:33.388 } 00:24:33.388 ] 00:24:33.388 }' 00:24:33.388 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:33.388 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.650 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:33.650 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:33.650 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:33.650 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:33.650 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:24:33.650 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:33.650 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:33.650 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:33.650 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.650 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:24:33.650 [2024-10-15 09:24:17.548569] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:33.650 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:33.908 "name": "Existed_Raid", 00:24:33.908 "aliases": [ 00:24:33.908 "47db3cb9-d76a-4589-8f6d-3f3b81f6bf2f" 00:24:33.908 ], 00:24:33.908 "product_name": "Raid Volume", 00:24:33.908 "block_size": 4096, 00:24:33.908 "num_blocks": 7936, 00:24:33.908 "uuid": "47db3cb9-d76a-4589-8f6d-3f3b81f6bf2f", 00:24:33.908 "md_size": 32, 00:24:33.908 "md_interleave": false, 00:24:33.908 "dif_type": 0, 00:24:33.908 "assigned_rate_limits": { 00:24:33.908 "rw_ios_per_sec": 0, 00:24:33.908 "rw_mbytes_per_sec": 0, 00:24:33.908 "r_mbytes_per_sec": 0, 00:24:33.908 "w_mbytes_per_sec": 0 00:24:33.908 }, 00:24:33.908 "claimed": false, 00:24:33.908 "zoned": false, 00:24:33.908 "supported_io_types": { 00:24:33.908 "read": true, 00:24:33.908 "write": true, 00:24:33.908 "unmap": false, 00:24:33.908 "flush": false, 00:24:33.908 "reset": true, 00:24:33.908 "nvme_admin": false, 00:24:33.908 "nvme_io": false, 00:24:33.908 "nvme_io_md": false, 00:24:33.908 "write_zeroes": true, 00:24:33.908 "zcopy": false, 00:24:33.908 "get_zone_info": false, 00:24:33.908 "zone_management": false, 00:24:33.908 "zone_append": false, 00:24:33.908 "compare": false, 00:24:33.908 "compare_and_write": false, 00:24:33.908 "abort": false, 00:24:33.908 "seek_hole": false, 00:24:33.908 "seek_data": false, 00:24:33.908 "copy": false, 00:24:33.908 "nvme_iov_md": false 00:24:33.908 }, 00:24:33.908 "memory_domains": [ 00:24:33.908 { 00:24:33.908 "dma_device_id": "system", 00:24:33.908 "dma_device_type": 1 00:24:33.908 }, 00:24:33.908 { 00:24:33.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.908 "dma_device_type": 2 00:24:33.908 }, 00:24:33.908 { 
00:24:33.908 "dma_device_id": "system", 00:24:33.908 "dma_device_type": 1 00:24:33.908 }, 00:24:33.908 { 00:24:33.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.908 "dma_device_type": 2 00:24:33.908 } 00:24:33.908 ], 00:24:33.908 "driver_specific": { 00:24:33.908 "raid": { 00:24:33.908 "uuid": "47db3cb9-d76a-4589-8f6d-3f3b81f6bf2f", 00:24:33.908 "strip_size_kb": 0, 00:24:33.908 "state": "online", 00:24:33.908 "raid_level": "raid1", 00:24:33.908 "superblock": true, 00:24:33.908 "num_base_bdevs": 2, 00:24:33.908 "num_base_bdevs_discovered": 2, 00:24:33.908 "num_base_bdevs_operational": 2, 00:24:33.908 "base_bdevs_list": [ 00:24:33.908 { 00:24:33.908 "name": "BaseBdev1", 00:24:33.908 "uuid": "09e9b71d-aa5f-4e2b-9cb3-c0648e40a34e", 00:24:33.908 "is_configured": true, 00:24:33.908 "data_offset": 256, 00:24:33.908 "data_size": 7936 00:24:33.908 }, 00:24:33.908 { 00:24:33.908 "name": "BaseBdev2", 00:24:33.908 "uuid": "8236f2ba-812a-4093-bf36-5737d376847a", 00:24:33.908 "is_configured": true, 00:24:33.908 "data_offset": 256, 00:24:33.908 "data_size": 7936 00:24:33.908 } 00:24:33.908 ] 00:24:33.908 } 00:24:33.908 } 00:24:33.908 }' 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:33.908 BaseBdev2' 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.908 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:34.166 [2024-10-15 09:24:17.836325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.166 09:24:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:34.166 "name": "Existed_Raid", 00:24:34.166 "uuid": "47db3cb9-d76a-4589-8f6d-3f3b81f6bf2f", 00:24:34.166 "strip_size_kb": 0, 00:24:34.166 "state": "online", 00:24:34.166 "raid_level": "raid1", 00:24:34.166 "superblock": true, 00:24:34.166 "num_base_bdevs": 2, 00:24:34.166 "num_base_bdevs_discovered": 1, 00:24:34.166 "num_base_bdevs_operational": 1, 00:24:34.166 "base_bdevs_list": [ 00:24:34.166 { 00:24:34.166 "name": null, 00:24:34.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.166 "is_configured": false, 00:24:34.166 "data_offset": 0, 00:24:34.166 "data_size": 7936 00:24:34.166 }, 00:24:34.166 { 00:24:34.166 "name": "BaseBdev2", 00:24:34.166 "uuid": 
"8236f2ba-812a-4093-bf36-5737d376847a", 00:24:34.166 "is_configured": true, 00:24:34.166 "data_offset": 256, 00:24:34.166 "data_size": 7936 00:24:34.166 } 00:24:34.166 ] 00:24:34.166 }' 00:24:34.166 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:34.166 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:34.732 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:34.732 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:34.732 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.732 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.732 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:34.732 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:34.732 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.732 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:34.732 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:34.732 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:34.732 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.732 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:34.732 [2024-10-15 09:24:18.518833] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:34.732 [2024-10-15 09:24:18.519224] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:34.732 [2024-10-15 09:24:18.617284] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:34.732 [2024-10-15 09:24:18.617598] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:34.732 [2024-10-15 09:24:18.617764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:34.732 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.732 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:34.732 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:34.732 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.732 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.732 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:34.732 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:34.732 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.997 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:34.997 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:34.997 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:24:34.997 09:24:18 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87943 00:24:34.997 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87943 ']' 00:24:34.997 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 87943 00:24:34.997 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:24:34.997 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:34.997 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87943 00:24:34.997 killing process with pid 87943 00:24:34.997 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:34.997 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:34.997 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87943' 00:24:34.997 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 87943 00:24:34.997 [2024-10-15 09:24:18.710851] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:34.997 09:24:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 87943 00:24:34.997 [2024-10-15 09:24:18.726489] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:35.932 ************************************ 00:24:35.932 END TEST raid_state_function_test_sb_md_separate 00:24:35.932 ************************************ 00:24:35.932 09:24:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:24:35.932 00:24:35.932 real 0m5.844s 00:24:35.932 user 0m8.757s 
00:24:35.932 sys 0m0.920s 00:24:35.932 09:24:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:35.932 09:24:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:36.191 09:24:19 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:24:36.191 09:24:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:24:36.191 09:24:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:36.191 09:24:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:36.191 ************************************ 00:24:36.191 START TEST raid_superblock_test_md_separate 00:24:36.191 ************************************ 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:24:36.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88198 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88198 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 88198 ']' 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:36.191 09:24:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:36.191 [2024-10-15 09:24:20.002945] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:24:36.191 [2024-10-15 09:24:20.003139] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88198 ] 00:24:36.450 [2024-10-15 09:24:20.189032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.450 [2024-10-15 09:24:20.335972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.709 [2024-10-15 09:24:20.563544] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:36.709 [2024-10-15 09:24:20.563824] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:37.277 09:24:21 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:37.277 malloc1 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:37.277 [2024-10-15 09:24:21.137319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:37.277 [2024-10-15 09:24:21.137571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.277 [2024-10-15 09:24:21.137653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:37.277 [2024-10-15 09:24:21.137767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.277 [2024-10-15 09:24:21.140568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.277 [2024-10-15 09:24:21.140728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:24:37.277 pt1 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:37.277 malloc2 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:37.277 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.277 09:24:21 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:37.277 [2024-10-15 09:24:21.200423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:37.277 [2024-10-15 09:24:21.200635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.277 [2024-10-15 09:24:21.200684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:37.277 [2024-10-15 09:24:21.200701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.277 [2024-10-15 09:24:21.203569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.277 [2024-10-15 09:24:21.203617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:37.536 pt2 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:37.536 [2024-10-15 09:24:21.208493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:37.536 [2024-10-15 09:24:21.211505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:37.536 [2024-10-15 09:24:21.211885] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:37.536 [2024-10-15 09:24:21.212026] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:37.536 [2024-10-15 09:24:21.212214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:37.536 [2024-10-15 09:24:21.212514] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:37.536 [2024-10-15 09:24:21.212636] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:37.536 [2024-10-15 09:24:21.212976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:37.536 09:24:21 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:37.536 "name": "raid_bdev1", 00:24:37.536 "uuid": "1019e651-eeff-48cf-834b-cd2b2ec4eaa9", 00:24:37.536 "strip_size_kb": 0, 00:24:37.536 "state": "online", 00:24:37.536 "raid_level": "raid1", 00:24:37.536 "superblock": true, 00:24:37.536 "num_base_bdevs": 2, 00:24:37.536 "num_base_bdevs_discovered": 2, 00:24:37.536 "num_base_bdevs_operational": 2, 00:24:37.536 "base_bdevs_list": [ 00:24:37.536 { 00:24:37.536 "name": "pt1", 00:24:37.536 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:37.536 "is_configured": true, 00:24:37.536 "data_offset": 256, 00:24:37.536 "data_size": 7936 00:24:37.536 }, 00:24:37.536 { 00:24:37.536 "name": "pt2", 00:24:37.536 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:37.536 "is_configured": true, 00:24:37.536 "data_offset": 256, 00:24:37.536 "data_size": 7936 00:24:37.536 } 00:24:37.536 ] 00:24:37.536 }' 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:37.536 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:38.104 [2024-10-15 09:24:21.797574] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:38.104 "name": "raid_bdev1", 00:24:38.104 "aliases": [ 00:24:38.104 "1019e651-eeff-48cf-834b-cd2b2ec4eaa9" 00:24:38.104 ], 00:24:38.104 "product_name": "Raid Volume", 00:24:38.104 "block_size": 4096, 00:24:38.104 "num_blocks": 7936, 00:24:38.104 "uuid": "1019e651-eeff-48cf-834b-cd2b2ec4eaa9", 00:24:38.104 "md_size": 32, 00:24:38.104 "md_interleave": false, 00:24:38.104 "dif_type": 0, 00:24:38.104 "assigned_rate_limits": { 00:24:38.104 "rw_ios_per_sec": 0, 00:24:38.104 "rw_mbytes_per_sec": 0, 00:24:38.104 "r_mbytes_per_sec": 0, 00:24:38.104 "w_mbytes_per_sec": 0 00:24:38.104 }, 00:24:38.104 "claimed": false, 00:24:38.104 "zoned": false, 
00:24:38.104 "supported_io_types": { 00:24:38.104 "read": true, 00:24:38.104 "write": true, 00:24:38.104 "unmap": false, 00:24:38.104 "flush": false, 00:24:38.104 "reset": true, 00:24:38.104 "nvme_admin": false, 00:24:38.104 "nvme_io": false, 00:24:38.104 "nvme_io_md": false, 00:24:38.104 "write_zeroes": true, 00:24:38.104 "zcopy": false, 00:24:38.104 "get_zone_info": false, 00:24:38.104 "zone_management": false, 00:24:38.104 "zone_append": false, 00:24:38.104 "compare": false, 00:24:38.104 "compare_and_write": false, 00:24:38.104 "abort": false, 00:24:38.104 "seek_hole": false, 00:24:38.104 "seek_data": false, 00:24:38.104 "copy": false, 00:24:38.104 "nvme_iov_md": false 00:24:38.104 }, 00:24:38.104 "memory_domains": [ 00:24:38.104 { 00:24:38.104 "dma_device_id": "system", 00:24:38.104 "dma_device_type": 1 00:24:38.104 }, 00:24:38.104 { 00:24:38.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.104 "dma_device_type": 2 00:24:38.104 }, 00:24:38.104 { 00:24:38.104 "dma_device_id": "system", 00:24:38.104 "dma_device_type": 1 00:24:38.104 }, 00:24:38.104 { 00:24:38.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.104 "dma_device_type": 2 00:24:38.104 } 00:24:38.104 ], 00:24:38.104 "driver_specific": { 00:24:38.104 "raid": { 00:24:38.104 "uuid": "1019e651-eeff-48cf-834b-cd2b2ec4eaa9", 00:24:38.104 "strip_size_kb": 0, 00:24:38.104 "state": "online", 00:24:38.104 "raid_level": "raid1", 00:24:38.104 "superblock": true, 00:24:38.104 "num_base_bdevs": 2, 00:24:38.104 "num_base_bdevs_discovered": 2, 00:24:38.104 "num_base_bdevs_operational": 2, 00:24:38.104 "base_bdevs_list": [ 00:24:38.104 { 00:24:38.104 "name": "pt1", 00:24:38.104 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:38.104 "is_configured": true, 00:24:38.104 "data_offset": 256, 00:24:38.104 "data_size": 7936 00:24:38.104 }, 00:24:38.104 { 00:24:38.104 "name": "pt2", 00:24:38.104 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:38.104 "is_configured": true, 00:24:38.104 "data_offset": 256, 
00:24:38.104 "data_size": 7936 00:24:38.104 } 00:24:38.104 ] 00:24:38.104 } 00:24:38.104 } 00:24:38.104 }' 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:38.104 pt2' 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:38.104 09:24:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.104 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:38.104 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:24:38.104 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:38.104 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:24:38.104 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.104 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:38.104 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:38.363 [2024-10-15 09:24:22.081606] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1019e651-eeff-48cf-834b-cd2b2ec4eaa9 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 1019e651-eeff-48cf-834b-cd2b2ec4eaa9 ']' 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:38.363 [2024-10-15 09:24:22.133224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:38.363 [2024-10-15 09:24:22.133261] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:38.363 [2024-10-15 09:24:22.133384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:38.363 [2024-10-15 09:24:22.133471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:38.363 [2024-10-15 09:24:22.133493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:24:38.363 09:24:22 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.363 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:38.363 [2024-10-15 09:24:22.273348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:38.363 [2024-10-15 09:24:22.276284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:38.363 [2024-10-15 09:24:22.276397] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:38.363 [2024-10-15 09:24:22.276490] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:38.363 [2024-10-15 09:24:22.276518] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:38.363 [2024-10-15 09:24:22.276535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:24:38.363 request: 00:24:38.363 { 00:24:38.363 "name": 
"raid_bdev1", 00:24:38.364 "raid_level": "raid1", 00:24:38.364 "base_bdevs": [ 00:24:38.364 "malloc1", 00:24:38.364 "malloc2" 00:24:38.364 ], 00:24:38.364 "superblock": false, 00:24:38.364 "method": "bdev_raid_create", 00:24:38.364 "req_id": 1 00:24:38.364 } 00:24:38.364 Got JSON-RPC error response 00:24:38.364 response: 00:24:38.364 { 00:24:38.364 "code": -17, 00:24:38.364 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:38.364 } 00:24:38.364 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:38.364 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:24:38.364 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:38.364 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:38.364 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:38.364 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:24:38.364 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.364 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.364 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:38.622 [2024-10-15 09:24:22.341378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:38.622 [2024-10-15 09:24:22.341609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.622 [2024-10-15 09:24:22.341755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:38.622 [2024-10-15 09:24:22.341876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.622 [2024-10-15 09:24:22.344781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.622 [2024-10-15 09:24:22.344833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:38.622 [2024-10-15 09:24:22.344919] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:38.622 [2024-10-15 09:24:22.345002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:38.622 pt1 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.622 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:38.622 "name": "raid_bdev1", 00:24:38.622 "uuid": "1019e651-eeff-48cf-834b-cd2b2ec4eaa9", 00:24:38.622 "strip_size_kb": 0, 00:24:38.622 "state": "configuring", 00:24:38.622 "raid_level": "raid1", 00:24:38.622 "superblock": true, 00:24:38.622 "num_base_bdevs": 2, 00:24:38.622 "num_base_bdevs_discovered": 1, 00:24:38.622 "num_base_bdevs_operational": 2, 00:24:38.622 "base_bdevs_list": [ 00:24:38.622 { 00:24:38.622 "name": "pt1", 00:24:38.622 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:38.622 "is_configured": true, 00:24:38.622 "data_offset": 256, 00:24:38.622 "data_size": 7936 00:24:38.622 }, 00:24:38.622 { 00:24:38.622 "name": null, 00:24:38.622 
"uuid": "00000000-0000-0000-0000-000000000002", 00:24:38.622 "is_configured": false, 00:24:38.622 "data_offset": 256, 00:24:38.622 "data_size": 7936 00:24:38.622 } 00:24:38.622 ] 00:24:38.622 }' 00:24:38.623 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:38.623 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:39.190 [2024-10-15 09:24:22.861480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:39.190 [2024-10-15 09:24:22.861611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:39.190 [2024-10-15 09:24:22.861646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:39.190 [2024-10-15 09:24:22.861665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:39.190 [2024-10-15 09:24:22.862041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:39.190 [2024-10-15 09:24:22.862079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:39.190 [2024-10-15 09:24:22.862203] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:24:39.190 [2024-10-15 09:24:22.862245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:39.190 [2024-10-15 09:24:22.862430] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:39.190 [2024-10-15 09:24:22.862453] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:39.190 [2024-10-15 09:24:22.862545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:39.190 [2024-10-15 09:24:22.862700] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:39.190 [2024-10-15 09:24:22.862722] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:24:39.190 [2024-10-15 09:24:22.862854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:39.190 pt2 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.190 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:39.190 "name": "raid_bdev1", 00:24:39.190 "uuid": "1019e651-eeff-48cf-834b-cd2b2ec4eaa9", 00:24:39.190 "strip_size_kb": 0, 00:24:39.190 "state": "online", 00:24:39.190 "raid_level": "raid1", 00:24:39.190 "superblock": true, 00:24:39.190 "num_base_bdevs": 2, 00:24:39.190 "num_base_bdevs_discovered": 2, 00:24:39.190 "num_base_bdevs_operational": 2, 00:24:39.190 "base_bdevs_list": [ 00:24:39.191 { 00:24:39.191 "name": "pt1", 00:24:39.191 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:39.191 "is_configured": true, 00:24:39.191 "data_offset": 256, 00:24:39.191 "data_size": 7936 00:24:39.191 }, 00:24:39.191 { 00:24:39.191 "name": "pt2", 00:24:39.191 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:39.191 "is_configured": true, 00:24:39.191 "data_offset": 256, 
00:24:39.191 "data_size": 7936 00:24:39.191 } 00:24:39.191 ] 00:24:39.191 }' 00:24:39.191 09:24:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:39.191 09:24:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:39.758 [2024-10-15 09:24:23.386007] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:39.758 "name": "raid_bdev1", 00:24:39.758 "aliases": [ 00:24:39.758 "1019e651-eeff-48cf-834b-cd2b2ec4eaa9" 00:24:39.758 ], 00:24:39.758 "product_name": 
"Raid Volume", 00:24:39.758 "block_size": 4096, 00:24:39.758 "num_blocks": 7936, 00:24:39.758 "uuid": "1019e651-eeff-48cf-834b-cd2b2ec4eaa9", 00:24:39.758 "md_size": 32, 00:24:39.758 "md_interleave": false, 00:24:39.758 "dif_type": 0, 00:24:39.758 "assigned_rate_limits": { 00:24:39.758 "rw_ios_per_sec": 0, 00:24:39.758 "rw_mbytes_per_sec": 0, 00:24:39.758 "r_mbytes_per_sec": 0, 00:24:39.758 "w_mbytes_per_sec": 0 00:24:39.758 }, 00:24:39.758 "claimed": false, 00:24:39.758 "zoned": false, 00:24:39.758 "supported_io_types": { 00:24:39.758 "read": true, 00:24:39.758 "write": true, 00:24:39.758 "unmap": false, 00:24:39.758 "flush": false, 00:24:39.758 "reset": true, 00:24:39.758 "nvme_admin": false, 00:24:39.758 "nvme_io": false, 00:24:39.758 "nvme_io_md": false, 00:24:39.758 "write_zeroes": true, 00:24:39.758 "zcopy": false, 00:24:39.758 "get_zone_info": false, 00:24:39.758 "zone_management": false, 00:24:39.758 "zone_append": false, 00:24:39.758 "compare": false, 00:24:39.758 "compare_and_write": false, 00:24:39.758 "abort": false, 00:24:39.758 "seek_hole": false, 00:24:39.758 "seek_data": false, 00:24:39.758 "copy": false, 00:24:39.758 "nvme_iov_md": false 00:24:39.758 }, 00:24:39.758 "memory_domains": [ 00:24:39.758 { 00:24:39.758 "dma_device_id": "system", 00:24:39.758 "dma_device_type": 1 00:24:39.758 }, 00:24:39.758 { 00:24:39.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.758 "dma_device_type": 2 00:24:39.758 }, 00:24:39.758 { 00:24:39.758 "dma_device_id": "system", 00:24:39.758 "dma_device_type": 1 00:24:39.758 }, 00:24:39.758 { 00:24:39.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.758 "dma_device_type": 2 00:24:39.758 } 00:24:39.758 ], 00:24:39.758 "driver_specific": { 00:24:39.758 "raid": { 00:24:39.758 "uuid": "1019e651-eeff-48cf-834b-cd2b2ec4eaa9", 00:24:39.758 "strip_size_kb": 0, 00:24:39.758 "state": "online", 00:24:39.758 "raid_level": "raid1", 00:24:39.758 "superblock": true, 00:24:39.758 "num_base_bdevs": 2, 00:24:39.758 
"num_base_bdevs_discovered": 2, 00:24:39.758 "num_base_bdevs_operational": 2, 00:24:39.758 "base_bdevs_list": [ 00:24:39.758 { 00:24:39.758 "name": "pt1", 00:24:39.758 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:39.758 "is_configured": true, 00:24:39.758 "data_offset": 256, 00:24:39.758 "data_size": 7936 00:24:39.758 }, 00:24:39.758 { 00:24:39.758 "name": "pt2", 00:24:39.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:39.758 "is_configured": true, 00:24:39.758 "data_offset": 256, 00:24:39.758 "data_size": 7936 00:24:39.758 } 00:24:39.758 ] 00:24:39.758 } 00:24:39.758 } 00:24:39.758 }' 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:39.758 pt2' 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.758 
09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:39.758 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:24:39.759 [2024-10-15 09:24:23.662086] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:39.759 09:24:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 1019e651-eeff-48cf-834b-cd2b2ec4eaa9 '!=' 1019e651-eeff-48cf-834b-cd2b2ec4eaa9 ']' 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:40.017 [2024-10-15 09:24:23.717854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:40.017 09:24:23 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.017 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:40.017 "name": "raid_bdev1", 00:24:40.017 "uuid": "1019e651-eeff-48cf-834b-cd2b2ec4eaa9", 00:24:40.017 "strip_size_kb": 0, 00:24:40.017 "state": "online", 00:24:40.017 "raid_level": "raid1", 00:24:40.017 "superblock": true, 00:24:40.017 "num_base_bdevs": 2, 00:24:40.017 "num_base_bdevs_discovered": 1, 00:24:40.017 "num_base_bdevs_operational": 1, 00:24:40.017 "base_bdevs_list": [ 00:24:40.017 { 00:24:40.017 "name": null, 00:24:40.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.018 "is_configured": false, 00:24:40.018 "data_offset": 0, 00:24:40.018 "data_size": 7936 00:24:40.018 }, 00:24:40.018 { 00:24:40.018 "name": "pt2", 00:24:40.018 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:40.018 "is_configured": true, 00:24:40.018 "data_offset": 256, 00:24:40.018 "data_size": 7936 00:24:40.018 } 00:24:40.018 ] 00:24:40.018 }' 00:24:40.018 09:24:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:24:40.018 09:24:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:40.585 [2024-10-15 09:24:24.261935] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:40.585 [2024-10-15 09:24:24.262175] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:40.585 [2024-10-15 09:24:24.262479] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:40.585 [2024-10-15 09:24:24.262691] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:40.585 [2024-10-15 09:24:24.262887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:24:40.585 09:24:24 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:40.585 [2024-10-15 09:24:24.333865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:40.585 [2024-10-15 09:24:24.333947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:40.585 
[2024-10-15 09:24:24.333975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:40.585 [2024-10-15 09:24:24.333993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:40.585 [2024-10-15 09:24:24.336878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:40.585 [2024-10-15 09:24:24.336946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:40.585 [2024-10-15 09:24:24.337052] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:40.585 [2024-10-15 09:24:24.337124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:40.585 [2024-10-15 09:24:24.337296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:40.585 [2024-10-15 09:24:24.337321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:40.585 [2024-10-15 09:24:24.337412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:40.585 [2024-10-15 09:24:24.337562] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:40.585 [2024-10-15 09:24:24.337584] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:24:40.585 [2024-10-15 09:24:24.337714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:40.585 pt2 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.585 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:40.585 "name": "raid_bdev1", 00:24:40.585 "uuid": "1019e651-eeff-48cf-834b-cd2b2ec4eaa9", 00:24:40.585 "strip_size_kb": 0, 00:24:40.585 "state": "online", 00:24:40.585 "raid_level": "raid1", 00:24:40.585 "superblock": true, 00:24:40.585 "num_base_bdevs": 2, 00:24:40.585 "num_base_bdevs_discovered": 1, 00:24:40.585 "num_base_bdevs_operational": 1, 00:24:40.585 "base_bdevs_list": [ 00:24:40.585 { 00:24:40.585 
"name": null, 00:24:40.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.585 "is_configured": false, 00:24:40.585 "data_offset": 256, 00:24:40.585 "data_size": 7936 00:24:40.585 }, 00:24:40.585 { 00:24:40.585 "name": "pt2", 00:24:40.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:40.585 "is_configured": true, 00:24:40.585 "data_offset": 256, 00:24:40.585 "data_size": 7936 00:24:40.585 } 00:24:40.585 ] 00:24:40.585 }' 00:24:40.586 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:40.586 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:41.153 [2024-10-15 09:24:24.845985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:41.153 [2024-10-15 09:24:24.846031] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:41.153 [2024-10-15 09:24:24.846183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:41.153 [2024-10-15 09:24:24.846264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:41.153 [2024-10-15 09:24:24.846281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.153 09:24:24 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:41.153 [2024-10-15 09:24:24.906054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:41.153 [2024-10-15 09:24:24.906171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:41.153 [2024-10-15 09:24:24.906208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:41.153 [2024-10-15 09:24:24.906225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:41.153 [2024-10-15 09:24:24.909106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:41.153 [2024-10-15 09:24:24.909170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:41.153 [2024-10-15 09:24:24.909263] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:24:41.153 [2024-10-15 09:24:24.909330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:41.153 [2024-10-15 09:24:24.909517] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:41.153 [2024-10-15 09:24:24.909544] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:41.153 [2024-10-15 09:24:24.909574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:24:41.153 [2024-10-15 09:24:24.909648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:41.153 [2024-10-15 09:24:24.909751] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:24:41.153 [2024-10-15 09:24:24.909767] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:41.153 [2024-10-15 09:24:24.909865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:41.153 [2024-10-15 09:24:24.910009] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:24:41.153 [2024-10-15 09:24:24.910028] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:24:41.153 [2024-10-15 09:24:24.910275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:41.153 pt1 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:41.153 "name": "raid_bdev1", 00:24:41.153 "uuid": "1019e651-eeff-48cf-834b-cd2b2ec4eaa9", 00:24:41.153 "strip_size_kb": 0, 00:24:41.153 "state": "online", 00:24:41.153 "raid_level": "raid1", 00:24:41.153 "superblock": true, 00:24:41.153 "num_base_bdevs": 2, 00:24:41.153 "num_base_bdevs_discovered": 1, 00:24:41.153 
"num_base_bdevs_operational": 1, 00:24:41.153 "base_bdevs_list": [ 00:24:41.153 { 00:24:41.153 "name": null, 00:24:41.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.153 "is_configured": false, 00:24:41.153 "data_offset": 256, 00:24:41.153 "data_size": 7936 00:24:41.153 }, 00:24:41.153 { 00:24:41.153 "name": "pt2", 00:24:41.153 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:41.153 "is_configured": true, 00:24:41.153 "data_offset": 256, 00:24:41.153 "data_size": 7936 00:24:41.153 } 00:24:41.153 ] 00:24:41.153 }' 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:41.153 09:24:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:41.728 [2024-10-15 
09:24:25.478690] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 1019e651-eeff-48cf-834b-cd2b2ec4eaa9 '!=' 1019e651-eeff-48cf-834b-cd2b2ec4eaa9 ']' 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88198 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 88198 ']' 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 88198 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88198 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:41.728 killing process with pid 88198 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88198' 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 88198 00:24:41.728 [2024-10-15 09:24:25.550628] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:41.728 09:24:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 88198 00:24:41.728 [2024-10-15 09:24:25.550772] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:24:41.728 [2024-10-15 09:24:25.550862] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:41.728 [2024-10-15 09:24:25.550887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:24:41.987 [2024-10-15 09:24:25.764556] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:43.364 09:24:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:24:43.364 00:24:43.364 real 0m7.007s 00:24:43.364 user 0m10.975s 00:24:43.364 sys 0m1.099s 00:24:43.364 09:24:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:43.364 09:24:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:43.364 ************************************ 00:24:43.364 END TEST raid_superblock_test_md_separate 00:24:43.364 ************************************ 00:24:43.364 09:24:26 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:24:43.364 09:24:26 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:24:43.364 09:24:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:24:43.364 09:24:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:43.364 09:24:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:43.364 ************************************ 00:24:43.364 START TEST raid_rebuild_test_sb_md_separate 00:24:43.364 ************************************ 00:24:43.364 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:24:43.364 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:43.364 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:24:43.364 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:24:43.364 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:24:43.364 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:43.364 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:43.364 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:43.364 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:43.364 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:43.364 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:43.364 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:43.364 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:43.364 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:43.364 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:43.364 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:43.364 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:43.365 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:43.365 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:43.365 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:43.365 
09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:43.365 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:43.365 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:43.365 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:24:43.365 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:24:43.365 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88534 00:24:43.365 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88534 00:24:43.365 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 88534 ']' 00:24:43.365 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.365 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:43.365 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:43.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.365 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:43.365 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:43.365 09:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:43.365 [2024-10-15 09:24:27.085301] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:24:43.365 [2024-10-15 09:24:27.085490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88534 ] 00:24:43.365 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:43.365 Zero copy mechanism will not be used. 00:24:43.365 [2024-10-15 09:24:27.263143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.623 [2024-10-15 09:24:27.410685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.882 [2024-10-15 09:24:27.641861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:43.882 [2024-10-15 09:24:27.641906] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:44.449 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:44.449 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:24:44.449 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:44.449 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:24:44.449 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.449 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:44.449 BaseBdev1_malloc 
00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:44.450 [2024-10-15 09:24:28.137855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:44.450 [2024-10-15 09:24:28.137945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:44.450 [2024-10-15 09:24:28.137978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:44.450 [2024-10-15 09:24:28.137998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.450 [2024-10-15 09:24:28.140708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.450 [2024-10-15 09:24:28.140777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:44.450 BaseBdev1 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:44.450 BaseBdev2_malloc 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:44.450 [2024-10-15 09:24:28.202161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:44.450 [2024-10-15 09:24:28.202235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:44.450 [2024-10-15 09:24:28.202266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:44.450 [2024-10-15 09:24:28.202285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.450 [2024-10-15 09:24:28.205016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.450 [2024-10-15 09:24:28.205078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:44.450 BaseBdev2 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:44.450 spare_malloc 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:44.450 spare_delay 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:44.450 [2024-10-15 09:24:28.285767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:44.450 [2024-10-15 09:24:28.285874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:44.450 [2024-10-15 09:24:28.285915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:44.450 [2024-10-15 09:24:28.285936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.450 [2024-10-15 09:24:28.289024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.450 [2024-10-15 09:24:28.289072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:44.450 spare 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:24:44.450 [2024-10-15 09:24:28.297908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:44.450 [2024-10-15 09:24:28.300829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:44.450 [2024-10-15 09:24:28.301168] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:44.450 [2024-10-15 09:24:28.301211] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:44.450 [2024-10-15 09:24:28.301347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:44.450 [2024-10-15 09:24:28.301557] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:44.450 [2024-10-15 09:24:28.301582] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:44.450 [2024-10-15 09:24:28.301782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:44.450 09:24:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:44.450 "name": "raid_bdev1", 00:24:44.450 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:24:44.450 "strip_size_kb": 0, 00:24:44.450 "state": "online", 00:24:44.450 "raid_level": "raid1", 00:24:44.450 "superblock": true, 00:24:44.450 "num_base_bdevs": 2, 00:24:44.450 "num_base_bdevs_discovered": 2, 00:24:44.450 "num_base_bdevs_operational": 2, 00:24:44.450 "base_bdevs_list": [ 00:24:44.450 { 00:24:44.450 "name": "BaseBdev1", 00:24:44.450 "uuid": "1db37bbc-eaa7-5102-be07-bb78628a1a0b", 00:24:44.450 "is_configured": true, 00:24:44.450 "data_offset": 256, 00:24:44.450 "data_size": 7936 00:24:44.450 }, 00:24:44.450 { 00:24:44.450 "name": "BaseBdev2", 00:24:44.450 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:24:44.450 "is_configured": true, 00:24:44.450 "data_offset": 256, 00:24:44.450 "data_size": 7936 
00:24:44.450 } 00:24:44.450 ] 00:24:44.450 }' 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:44.450 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:45.019 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:45.019 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.019 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:45.019 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:45.019 [2024-10-15 09:24:28.826542] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:45.019 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.019 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:24:45.019 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:45.019 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.019 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.019 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:45.019 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.019 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:24:45.019 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:24:45.019 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:24:45.019 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:24:45.020 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:24:45.020 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:45.020 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:45.020 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:45.020 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:45.020 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:45.020 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:24:45.020 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:45.020 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:45.020 09:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:45.586 [2024-10-15 09:24:29.222412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:45.586 /dev/nbd0 00:24:45.586 09:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:45.586 09:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:45.586 09:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:45.586 09:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@869 -- # local i 00:24:45.586 09:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:45.586 09:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:45.586 09:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:45.586 09:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:24:45.586 09:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:45.586 09:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:45.586 09:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:45.586 1+0 records in 00:24:45.586 1+0 records out 00:24:45.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046908 s, 8.7 MB/s 00:24:45.586 09:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:45.586 09:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:24:45.586 09:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:45.586 09:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:45.586 09:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:24:45.586 09:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:45.586 09:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:45.586 09:24:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:24:45.587 09:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:24:45.587 09:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:24:46.523 7936+0 records in 00:24:46.523 7936+0 records out 00:24:46.523 32505856 bytes (33 MB, 31 MiB) copied, 0.931071 s, 34.9 MB/s 00:24:46.523 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:46.523 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:46.523 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:46.523 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:46.523 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:24:46.523 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:46.523 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:46.782 [2024-10-15 09:24:30.526866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:46.782 09:24:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:46.782 [2024-10-15 09:24:30.545161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:46.782 "name": "raid_bdev1", 00:24:46.782 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:24:46.782 "strip_size_kb": 0, 00:24:46.782 "state": "online", 00:24:46.782 "raid_level": "raid1", 00:24:46.782 "superblock": true, 00:24:46.782 "num_base_bdevs": 2, 00:24:46.782 "num_base_bdevs_discovered": 1, 00:24:46.782 "num_base_bdevs_operational": 1, 00:24:46.782 "base_bdevs_list": [ 00:24:46.782 { 00:24:46.782 "name": null, 00:24:46.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:46.782 "is_configured": false, 00:24:46.782 "data_offset": 0, 00:24:46.782 "data_size": 7936 00:24:46.782 }, 00:24:46.782 { 00:24:46.782 "name": "BaseBdev2", 00:24:46.782 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:24:46.782 "is_configured": true, 00:24:46.782 "data_offset": 256, 00:24:46.782 "data_size": 7936 00:24:46.782 } 00:24:46.782 ] 00:24:46.782 }' 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:46.782 09:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:24:47.350 09:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:47.350 09:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.350 09:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:47.350 [2024-10-15 09:24:31.089352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:47.350 [2024-10-15 09:24:31.103919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:24:47.350 09:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.350 09:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:47.350 [2024-10-15 09:24:31.106570] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:48.286 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:48.286 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:48.286 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:48.286 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:48.286 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:48.286 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:48.286 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.286 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.286 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:48.286 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.286 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:48.286 "name": "raid_bdev1", 00:24:48.286 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:24:48.286 "strip_size_kb": 0, 00:24:48.286 "state": "online", 00:24:48.286 "raid_level": "raid1", 00:24:48.286 "superblock": true, 00:24:48.286 "num_base_bdevs": 2, 00:24:48.286 "num_base_bdevs_discovered": 2, 00:24:48.286 "num_base_bdevs_operational": 2, 00:24:48.286 "process": { 00:24:48.286 "type": "rebuild", 00:24:48.286 "target": "spare", 00:24:48.286 "progress": { 00:24:48.286 "blocks": 2560, 00:24:48.286 "percent": 32 00:24:48.286 } 00:24:48.286 }, 00:24:48.286 "base_bdevs_list": [ 00:24:48.286 { 00:24:48.286 "name": "spare", 00:24:48.286 "uuid": "de612b69-d773-5c2c-9faa-577a7c6c5e7f", 00:24:48.286 "is_configured": true, 00:24:48.286 "data_offset": 256, 00:24:48.286 "data_size": 7936 00:24:48.286 }, 00:24:48.286 { 00:24:48.286 "name": "BaseBdev2", 00:24:48.286 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:24:48.286 "is_configured": true, 00:24:48.286 "data_offset": 256, 00:24:48.286 "data_size": 7936 00:24:48.286 } 00:24:48.286 ] 00:24:48.286 }' 00:24:48.286 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:48.615 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:48.615 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:48.615 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:48.615 09:24:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:48.615 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.615 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:48.615 [2024-10-15 09:24:32.268381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:48.615 [2024-10-15 09:24:32.318094] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:48.615 [2024-10-15 09:24:32.318196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:48.615 [2024-10-15 09:24:32.318229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:48.615 [2024-10-15 09:24:32.318246] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:48.616 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.616 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:48.616 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:48.616 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:48.616 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:48.616 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:48.616 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:48.616 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:48.616 09:24:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:48.616 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:48.616 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:48.616 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.616 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:48.616 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.616 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:48.616 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.616 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:48.616 "name": "raid_bdev1", 00:24:48.616 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:24:48.616 "strip_size_kb": 0, 00:24:48.616 "state": "online", 00:24:48.616 "raid_level": "raid1", 00:24:48.616 "superblock": true, 00:24:48.616 "num_base_bdevs": 2, 00:24:48.616 "num_base_bdevs_discovered": 1, 00:24:48.616 "num_base_bdevs_operational": 1, 00:24:48.616 "base_bdevs_list": [ 00:24:48.616 { 00:24:48.616 "name": null, 00:24:48.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.616 "is_configured": false, 00:24:48.616 "data_offset": 0, 00:24:48.616 "data_size": 7936 00:24:48.616 }, 00:24:48.616 { 00:24:48.616 "name": "BaseBdev2", 00:24:48.616 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:24:48.616 "is_configured": true, 00:24:48.616 "data_offset": 256, 00:24:48.616 "data_size": 7936 00:24:48.616 } 00:24:48.616 ] 00:24:48.616 }' 00:24:48.616 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:48.616 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:49.185 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:49.185 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:49.185 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:49.185 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:49.185 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:49.185 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:49.185 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:49.185 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.185 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:49.185 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.185 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:49.185 "name": "raid_bdev1", 00:24:49.185 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:24:49.185 "strip_size_kb": 0, 00:24:49.185 "state": "online", 00:24:49.185 "raid_level": "raid1", 00:24:49.185 "superblock": true, 00:24:49.185 "num_base_bdevs": 2, 00:24:49.185 "num_base_bdevs_discovered": 1, 00:24:49.185 "num_base_bdevs_operational": 1, 00:24:49.185 "base_bdevs_list": [ 00:24:49.185 { 00:24:49.185 "name": null, 00:24:49.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.185 
"is_configured": false, 00:24:49.185 "data_offset": 0, 00:24:49.185 "data_size": 7936 00:24:49.185 }, 00:24:49.185 { 00:24:49.185 "name": "BaseBdev2", 00:24:49.185 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:24:49.185 "is_configured": true, 00:24:49.185 "data_offset": 256, 00:24:49.185 "data_size": 7936 00:24:49.185 } 00:24:49.185 ] 00:24:49.185 }' 00:24:49.185 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:49.185 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:49.185 09:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:49.185 09:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:49.185 09:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:49.185 09:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.185 09:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:49.185 [2024-10-15 09:24:33.045969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:49.185 [2024-10-15 09:24:33.059707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:24:49.185 09:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.185 09:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:49.185 [2024-10-15 09:24:33.062359] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:50.562 09:24:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:50.562 "name": "raid_bdev1", 00:24:50.562 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:24:50.562 "strip_size_kb": 0, 00:24:50.562 "state": "online", 00:24:50.562 "raid_level": "raid1", 00:24:50.562 "superblock": true, 00:24:50.562 "num_base_bdevs": 2, 00:24:50.562 "num_base_bdevs_discovered": 2, 00:24:50.562 "num_base_bdevs_operational": 2, 00:24:50.562 "process": { 00:24:50.562 "type": "rebuild", 00:24:50.562 "target": "spare", 00:24:50.562 "progress": { 00:24:50.562 "blocks": 2560, 00:24:50.562 "percent": 32 00:24:50.562 } 00:24:50.562 }, 00:24:50.562 "base_bdevs_list": [ 00:24:50.562 { 00:24:50.562 "name": "spare", 00:24:50.562 "uuid": "de612b69-d773-5c2c-9faa-577a7c6c5e7f", 00:24:50.562 "is_configured": true, 00:24:50.562 "data_offset": 256, 00:24:50.562 "data_size": 7936 00:24:50.562 }, 
00:24:50.562 { 00:24:50.562 "name": "BaseBdev2", 00:24:50.562 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:24:50.562 "is_configured": true, 00:24:50.562 "data_offset": 256, 00:24:50.562 "data_size": 7936 00:24:50.562 } 00:24:50.562 ] 00:24:50.562 }' 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:50.562 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=785 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:50.562 09:24:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.562 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:50.562 "name": "raid_bdev1", 00:24:50.562 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:24:50.562 "strip_size_kb": 0, 00:24:50.562 "state": "online", 00:24:50.563 "raid_level": "raid1", 00:24:50.563 "superblock": true, 00:24:50.563 "num_base_bdevs": 2, 00:24:50.563 "num_base_bdevs_discovered": 2, 00:24:50.563 "num_base_bdevs_operational": 2, 00:24:50.563 "process": { 00:24:50.563 "type": "rebuild", 00:24:50.563 "target": "spare", 00:24:50.563 "progress": { 00:24:50.563 "blocks": 2816, 00:24:50.563 "percent": 35 00:24:50.563 } 00:24:50.563 }, 00:24:50.563 "base_bdevs_list": [ 00:24:50.563 { 00:24:50.563 "name": "spare", 00:24:50.563 "uuid": "de612b69-d773-5c2c-9faa-577a7c6c5e7f", 00:24:50.563 "is_configured": true, 00:24:50.563 "data_offset": 256, 00:24:50.563 "data_size": 7936 00:24:50.563 }, 00:24:50.563 { 00:24:50.563 "name": "BaseBdev2", 00:24:50.563 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:24:50.563 
"is_configured": true, 00:24:50.563 "data_offset": 256, 00:24:50.563 "data_size": 7936 00:24:50.563 } 00:24:50.563 ] 00:24:50.563 }' 00:24:50.563 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:50.563 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:50.563 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:50.563 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:50.563 09:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:51.492 09:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:51.492 09:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:51.492 09:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:51.492 09:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:51.492 09:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:51.492 09:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:51.492 09:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:51.492 09:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.492 09:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.492 09:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:51.492 09:24:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.749 09:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:51.749 "name": "raid_bdev1", 00:24:51.749 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:24:51.749 "strip_size_kb": 0, 00:24:51.749 "state": "online", 00:24:51.749 "raid_level": "raid1", 00:24:51.749 "superblock": true, 00:24:51.749 "num_base_bdevs": 2, 00:24:51.749 "num_base_bdevs_discovered": 2, 00:24:51.749 "num_base_bdevs_operational": 2, 00:24:51.749 "process": { 00:24:51.749 "type": "rebuild", 00:24:51.749 "target": "spare", 00:24:51.749 "progress": { 00:24:51.749 "blocks": 5888, 00:24:51.749 "percent": 74 00:24:51.749 } 00:24:51.749 }, 00:24:51.749 "base_bdevs_list": [ 00:24:51.749 { 00:24:51.749 "name": "spare", 00:24:51.749 "uuid": "de612b69-d773-5c2c-9faa-577a7c6c5e7f", 00:24:51.749 "is_configured": true, 00:24:51.749 "data_offset": 256, 00:24:51.749 "data_size": 7936 00:24:51.749 }, 00:24:51.749 { 00:24:51.749 "name": "BaseBdev2", 00:24:51.749 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:24:51.749 "is_configured": true, 00:24:51.749 "data_offset": 256, 00:24:51.749 "data_size": 7936 00:24:51.749 } 00:24:51.749 ] 00:24:51.749 }' 00:24:51.749 09:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:51.749 09:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:51.749 09:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:51.749 09:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:51.749 09:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:52.315 [2024-10-15 09:24:36.192504] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:24:52.315 [2024-10-15 09:24:36.192629] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:52.315 [2024-10-15 09:24:36.192797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:52.917 "name": "raid_bdev1", 00:24:52.917 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:24:52.917 "strip_size_kb": 0, 00:24:52.917 "state": "online", 00:24:52.917 "raid_level": "raid1", 00:24:52.917 "superblock": true, 00:24:52.917 
"num_base_bdevs": 2, 00:24:52.917 "num_base_bdevs_discovered": 2, 00:24:52.917 "num_base_bdevs_operational": 2, 00:24:52.917 "base_bdevs_list": [ 00:24:52.917 { 00:24:52.917 "name": "spare", 00:24:52.917 "uuid": "de612b69-d773-5c2c-9faa-577a7c6c5e7f", 00:24:52.917 "is_configured": true, 00:24:52.917 "data_offset": 256, 00:24:52.917 "data_size": 7936 00:24:52.917 }, 00:24:52.917 { 00:24:52.917 "name": "BaseBdev2", 00:24:52.917 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:24:52.917 "is_configured": true, 00:24:52.917 "data_offset": 256, 00:24:52.917 "data_size": 7936 00:24:52.917 } 00:24:52.917 ] 00:24:52.917 }' 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.917 
09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:52.917 "name": "raid_bdev1", 00:24:52.917 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:24:52.917 "strip_size_kb": 0, 00:24:52.917 "state": "online", 00:24:52.917 "raid_level": "raid1", 00:24:52.917 "superblock": true, 00:24:52.917 "num_base_bdevs": 2, 00:24:52.917 "num_base_bdevs_discovered": 2, 00:24:52.917 "num_base_bdevs_operational": 2, 00:24:52.917 "base_bdevs_list": [ 00:24:52.917 { 00:24:52.917 "name": "spare", 00:24:52.917 "uuid": "de612b69-d773-5c2c-9faa-577a7c6c5e7f", 00:24:52.917 "is_configured": true, 00:24:52.917 "data_offset": 256, 00:24:52.917 "data_size": 7936 00:24:52.917 }, 00:24:52.917 { 00:24:52.917 "name": "BaseBdev2", 00:24:52.917 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:24:52.917 "is_configured": true, 00:24:52.917 "data_offset": 256, 00:24:52.917 "data_size": 7936 00:24:52.917 } 00:24:52.917 ] 00:24:52.917 }' 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:52.917 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:53.188 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:53.188 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:53.188 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:53.188 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:53.188 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:53.188 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:53.189 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:53.189 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:53.189 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:53.189 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:53.189 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:53.189 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.189 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:53.189 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.189 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:53.189 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.189 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:53.189 "name": "raid_bdev1", 00:24:53.189 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:24:53.189 
"strip_size_kb": 0, 00:24:53.189 "state": "online", 00:24:53.189 "raid_level": "raid1", 00:24:53.189 "superblock": true, 00:24:53.189 "num_base_bdevs": 2, 00:24:53.189 "num_base_bdevs_discovered": 2, 00:24:53.189 "num_base_bdevs_operational": 2, 00:24:53.189 "base_bdevs_list": [ 00:24:53.189 { 00:24:53.189 "name": "spare", 00:24:53.189 "uuid": "de612b69-d773-5c2c-9faa-577a7c6c5e7f", 00:24:53.189 "is_configured": true, 00:24:53.189 "data_offset": 256, 00:24:53.189 "data_size": 7936 00:24:53.189 }, 00:24:53.189 { 00:24:53.189 "name": "BaseBdev2", 00:24:53.189 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:24:53.189 "is_configured": true, 00:24:53.189 "data_offset": 256, 00:24:53.189 "data_size": 7936 00:24:53.189 } 00:24:53.189 ] 00:24:53.189 }' 00:24:53.189 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:53.189 09:24:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:53.757 [2024-10-15 09:24:37.397178] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:53.757 [2024-10-15 09:24:37.397226] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:53.757 [2024-10-15 09:24:37.397349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:53.757 [2024-10-15 09:24:37.397456] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:53.757 [2024-10-15 09:24:37.397480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:53.757 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:54.016 /dev/nbd0 00:24:54.016 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:54.016 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:54.016 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:54.016 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:24:54.016 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:54.016 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:54.016 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:54.016 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:24:54.016 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:54.016 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:54.016 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:54.016 1+0 records in 00:24:54.016 1+0 records out 00:24:54.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003116 s, 13.1 MB/s 00:24:54.016 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:54.016 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:24:54.016 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:54.016 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:54.016 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:24:54.016 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:54.016 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:54.016 09:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:54.274 /dev/nbd1 00:24:54.274 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:54.274 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:54.274 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:54.274 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:24:54.274 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:54.274 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:54.274 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:54.274 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:24:54.274 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:54.274 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:54.274 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:54.274 1+0 records in 00:24:54.274 1+0 records out 00:24:54.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486771 s, 8.4 MB/s 00:24:54.274 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:54.274 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:24:54.274 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:54.274 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:54.274 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:24:54.274 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:54.274 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:54.274 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:54.533 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:54.533 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:54.533 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:54.533 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:24:54.533 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:24:54.533 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:54.533 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:54.791 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:54.791 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:54.791 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:54.791 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:54.791 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:54.791 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:54.791 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:24:54.791 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:24:54.791 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:54.791 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:55.050 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:55.050 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:55.050 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:24:55.050 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:55.050 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:55.050 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:55.050 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:24:55.050 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:24:55.050 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:24:55.050 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:55.050 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.050 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:55.050 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.050 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:55.050 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.050 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:55.050 [2024-10-15 09:24:38.935110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:55.050 [2024-10-15 09:24:38.935192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:55.050 [2024-10-15 09:24:38.935229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:55.050 [2024-10-15 09:24:38.935246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:24:55.050 [2024-10-15 09:24:38.938064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:55.050 [2024-10-15 09:24:38.938111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:55.050 [2024-10-15 09:24:38.938231] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:55.050 [2024-10-15 09:24:38.938310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:55.050 [2024-10-15 09:24:38.938509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:55.050 spare 00:24:55.050 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.050 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:55.050 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.050 09:24:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:55.309 [2024-10-15 09:24:39.038647] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:55.309 [2024-10-15 09:24:39.038742] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:55.309 [2024-10-15 09:24:39.038934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:24:55.309 [2024-10-15 09:24:39.039204] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:55.309 [2024-10-15 09:24:39.039221] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:55.309 [2024-10-15 09:24:39.039440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:55.309 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:24:55.309 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:55.309 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:55.309 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:55.309 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:55.309 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:55.309 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:55.309 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:55.309 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:55.309 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:55.309 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:55.309 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.309 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.309 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.309 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:55.309 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.309 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:55.309 "name": "raid_bdev1", 00:24:55.309 "uuid": 
"e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:24:55.309 "strip_size_kb": 0, 00:24:55.309 "state": "online", 00:24:55.309 "raid_level": "raid1", 00:24:55.309 "superblock": true, 00:24:55.309 "num_base_bdevs": 2, 00:24:55.309 "num_base_bdevs_discovered": 2, 00:24:55.309 "num_base_bdevs_operational": 2, 00:24:55.309 "base_bdevs_list": [ 00:24:55.309 { 00:24:55.309 "name": "spare", 00:24:55.309 "uuid": "de612b69-d773-5c2c-9faa-577a7c6c5e7f", 00:24:55.309 "is_configured": true, 00:24:55.309 "data_offset": 256, 00:24:55.309 "data_size": 7936 00:24:55.309 }, 00:24:55.309 { 00:24:55.309 "name": "BaseBdev2", 00:24:55.309 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:24:55.309 "is_configured": true, 00:24:55.309 "data_offset": 256, 00:24:55.309 "data_size": 7936 00:24:55.309 } 00:24:55.309 ] 00:24:55.309 }' 00:24:55.309 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:55.309 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:55.875 "name": "raid_bdev1", 00:24:55.875 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:24:55.875 "strip_size_kb": 0, 00:24:55.875 "state": "online", 00:24:55.875 "raid_level": "raid1", 00:24:55.875 "superblock": true, 00:24:55.875 "num_base_bdevs": 2, 00:24:55.875 "num_base_bdevs_discovered": 2, 00:24:55.875 "num_base_bdevs_operational": 2, 00:24:55.875 "base_bdevs_list": [ 00:24:55.875 { 00:24:55.875 "name": "spare", 00:24:55.875 "uuid": "de612b69-d773-5c2c-9faa-577a7c6c5e7f", 00:24:55.875 "is_configured": true, 00:24:55.875 "data_offset": 256, 00:24:55.875 "data_size": 7936 00:24:55.875 }, 00:24:55.875 { 00:24:55.875 "name": "BaseBdev2", 00:24:55.875 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:24:55.875 "is_configured": true, 00:24:55.875 "data_offset": 256, 00:24:55.875 "data_size": 7936 00:24:55.875 } 00:24:55.875 ] 00:24:55.875 }' 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:55.875 [2024-10-15 09:24:39.763618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:55.875 09:24:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:55.875 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.134 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:56.134 "name": "raid_bdev1", 00:24:56.134 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:24:56.134 "strip_size_kb": 0, 00:24:56.134 "state": "online", 00:24:56.134 "raid_level": "raid1", 00:24:56.134 "superblock": true, 00:24:56.134 "num_base_bdevs": 2, 00:24:56.134 "num_base_bdevs_discovered": 1, 00:24:56.134 "num_base_bdevs_operational": 1, 00:24:56.134 "base_bdevs_list": [ 00:24:56.134 { 00:24:56.134 "name": null, 00:24:56.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.134 "is_configured": false, 00:24:56.134 "data_offset": 0, 00:24:56.134 "data_size": 7936 00:24:56.134 }, 00:24:56.134 { 00:24:56.134 "name": "BaseBdev2", 00:24:56.134 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:24:56.134 "is_configured": true, 00:24:56.134 "data_offset": 256, 00:24:56.134 "data_size": 7936 00:24:56.134 } 00:24:56.134 ] 00:24:56.134 }' 00:24:56.134 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:56.134 09:24:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:56.700 09:24:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:56.700 09:24:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.700 09:24:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:56.700 [2024-10-15 09:24:40.335871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:56.700 [2024-10-15 09:24:40.336200] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:56.700 [2024-10-15 09:24:40.336229] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:56.700 [2024-10-15 09:24:40.336288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:56.700 [2024-10-15 09:24:40.349680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:24:56.700 09:24:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.700 09:24:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:56.700 [2024-10-15 09:24:40.352361] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:57.637 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:57.637 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:57.637 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:57.637 09:24:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:57.637 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:57.637 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:57.637 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.637 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.637 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.637 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.637 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:57.637 "name": "raid_bdev1", 00:24:57.637 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:24:57.637 "strip_size_kb": 0, 00:24:57.637 "state": "online", 00:24:57.637 "raid_level": "raid1", 00:24:57.637 "superblock": true, 00:24:57.637 "num_base_bdevs": 2, 00:24:57.637 "num_base_bdevs_discovered": 2, 00:24:57.637 "num_base_bdevs_operational": 2, 00:24:57.637 "process": { 00:24:57.637 "type": "rebuild", 00:24:57.637 "target": "spare", 00:24:57.637 "progress": { 00:24:57.637 "blocks": 2560, 00:24:57.637 "percent": 32 00:24:57.637 } 00:24:57.637 }, 00:24:57.637 "base_bdevs_list": [ 00:24:57.637 { 00:24:57.637 "name": "spare", 00:24:57.637 "uuid": "de612b69-d773-5c2c-9faa-577a7c6c5e7f", 00:24:57.637 "is_configured": true, 00:24:57.637 "data_offset": 256, 00:24:57.637 "data_size": 7936 00:24:57.637 }, 00:24:57.637 { 00:24:57.637 "name": "BaseBdev2", 00:24:57.637 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:24:57.637 "is_configured": true, 00:24:57.637 "data_offset": 256, 00:24:57.637 "data_size": 7936 00:24:57.637 } 00:24:57.637 ] 00:24:57.637 
}' 00:24:57.637 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:57.637 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:57.637 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:57.637 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:57.637 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:57.637 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.637 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.637 [2024-10-15 09:24:41.522587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:57.637 [2024-10-15 09:24:41.564228] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:57.637 [2024-10-15 09:24:41.564370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:57.637 [2024-10-15 09:24:41.564396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:57.896 [2024-10-15 09:24:41.564429] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:57.896 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.896 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:57.896 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:57.896 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:24:57.896 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:57.896 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:57.896 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:57.896 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:57.896 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:57.896 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:57.896 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:57.896 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:57.896 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.896 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.896 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.896 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.896 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:57.896 "name": "raid_bdev1", 00:24:57.896 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:24:57.896 "strip_size_kb": 0, 00:24:57.896 "state": "online", 00:24:57.896 "raid_level": "raid1", 00:24:57.896 "superblock": true, 00:24:57.896 "num_base_bdevs": 2, 00:24:57.896 "num_base_bdevs_discovered": 1, 00:24:57.896 "num_base_bdevs_operational": 1, 00:24:57.896 "base_bdevs_list": [ 00:24:57.896 { 00:24:57.896 "name": 
null, 00:24:57.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.896 "is_configured": false, 00:24:57.896 "data_offset": 0, 00:24:57.896 "data_size": 7936 00:24:57.896 }, 00:24:57.896 { 00:24:57.896 "name": "BaseBdev2", 00:24:57.896 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:24:57.896 "is_configured": true, 00:24:57.896 "data_offset": 256, 00:24:57.896 "data_size": 7936 00:24:57.896 } 00:24:57.896 ] 00:24:57.896 }' 00:24:57.896 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:57.896 09:24:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:58.464 09:24:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:58.464 09:24:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.464 09:24:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:58.464 [2024-10-15 09:24:42.160213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:58.464 [2024-10-15 09:24:42.160307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:58.464 [2024-10-15 09:24:42.160348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:58.464 [2024-10-15 09:24:42.160368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:58.464 [2024-10-15 09:24:42.160728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:58.464 [2024-10-15 09:24:42.160772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:58.464 [2024-10-15 09:24:42.160861] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:58.464 [2024-10-15 09:24:42.160886] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:58.464 [2024-10-15 09:24:42.160900] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:58.464 [2024-10-15 09:24:42.160945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:58.464 [2024-10-15 09:24:42.174070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:24:58.464 spare 00:24:58.464 09:24:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.464 09:24:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:24:58.464 [2024-10-15 09:24:42.176668] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:59.401 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:59.401 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:59.402 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:59.402 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:59.402 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:59.402 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:59.402 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.402 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.402 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:59.402 09:24:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.402 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:59.402 "name": "raid_bdev1", 00:24:59.402 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:24:59.402 "strip_size_kb": 0, 00:24:59.402 "state": "online", 00:24:59.402 "raid_level": "raid1", 00:24:59.402 "superblock": true, 00:24:59.402 "num_base_bdevs": 2, 00:24:59.402 "num_base_bdevs_discovered": 2, 00:24:59.402 "num_base_bdevs_operational": 2, 00:24:59.402 "process": { 00:24:59.402 "type": "rebuild", 00:24:59.402 "target": "spare", 00:24:59.402 "progress": { 00:24:59.402 "blocks": 2560, 00:24:59.402 "percent": 32 00:24:59.402 } 00:24:59.402 }, 00:24:59.402 "base_bdevs_list": [ 00:24:59.402 { 00:24:59.402 "name": "spare", 00:24:59.402 "uuid": "de612b69-d773-5c2c-9faa-577a7c6c5e7f", 00:24:59.402 "is_configured": true, 00:24:59.402 "data_offset": 256, 00:24:59.402 "data_size": 7936 00:24:59.402 }, 00:24:59.402 { 00:24:59.402 "name": "BaseBdev2", 00:24:59.402 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:24:59.402 "is_configured": true, 00:24:59.402 "data_offset": 256, 00:24:59.402 "data_size": 7936 00:24:59.402 } 00:24:59.402 ] 00:24:59.402 }' 00:24:59.402 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:59.402 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:59.402 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:59.661 [2024-10-15 09:24:43.347445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:59.661 [2024-10-15 09:24:43.388316] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:59.661 [2024-10-15 09:24:43.388470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:59.661 [2024-10-15 09:24:43.388501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:59.661 [2024-10-15 09:24:43.388513] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:59.661 "name": "raid_bdev1", 00:24:59.661 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:24:59.661 "strip_size_kb": 0, 00:24:59.661 "state": "online", 00:24:59.661 "raid_level": "raid1", 00:24:59.661 "superblock": true, 00:24:59.661 "num_base_bdevs": 2, 00:24:59.661 "num_base_bdevs_discovered": 1, 00:24:59.661 "num_base_bdevs_operational": 1, 00:24:59.661 "base_bdevs_list": [ 00:24:59.661 { 00:24:59.661 "name": null, 00:24:59.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.661 "is_configured": false, 00:24:59.661 "data_offset": 0, 00:24:59.661 "data_size": 7936 00:24:59.661 }, 00:24:59.661 { 00:24:59.661 "name": "BaseBdev2", 00:24:59.661 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:24:59.661 "is_configured": true, 00:24:59.661 "data_offset": 256, 00:24:59.661 "data_size": 7936 00:24:59.661 } 00:24:59.661 ] 00:24:59.661 }' 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:59.661 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:00.230 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:00.230 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:00.230 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:00.230 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:00.230 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:00.230 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.230 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:00.230 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.230 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:00.230 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.230 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:00.230 "name": "raid_bdev1", 00:25:00.230 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:25:00.230 "strip_size_kb": 0, 00:25:00.230 "state": "online", 00:25:00.230 "raid_level": "raid1", 00:25:00.230 "superblock": true, 00:25:00.230 "num_base_bdevs": 2, 00:25:00.230 "num_base_bdevs_discovered": 1, 00:25:00.230 "num_base_bdevs_operational": 1, 00:25:00.230 "base_bdevs_list": [ 00:25:00.230 { 00:25:00.230 "name": null, 00:25:00.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:00.230 "is_configured": false, 00:25:00.230 "data_offset": 0, 00:25:00.230 "data_size": 7936 00:25:00.230 }, 00:25:00.230 { 00:25:00.230 "name": "BaseBdev2", 00:25:00.230 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 
00:25:00.230 "is_configured": true, 00:25:00.230 "data_offset": 256, 00:25:00.230 "data_size": 7936 00:25:00.230 } 00:25:00.230 ] 00:25:00.230 }' 00:25:00.230 09:24:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:00.230 09:24:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:00.230 09:24:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:00.230 09:24:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:00.230 09:24:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:25:00.230 09:24:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.230 09:24:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:00.230 09:24:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.230 09:24:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:00.230 09:24:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.230 09:24:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:00.230 [2024-10-15 09:24:44.121170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:00.230 [2024-10-15 09:24:44.121250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:00.230 [2024-10-15 09:24:44.121290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:00.230 [2024-10-15 09:24:44.121307] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:25:00.230 [2024-10-15 09:24:44.121614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:00.230 [2024-10-15 09:24:44.121648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:00.230 [2024-10-15 09:24:44.121724] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:00.230 [2024-10-15 09:24:44.121746] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:00.230 [2024-10-15 09:24:44.121767] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:00.230 [2024-10-15 09:24:44.121781] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:25:00.230 BaseBdev1 00:25:00.230 09:24:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.230 09:24:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:25:01.605 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:01.605 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:01.605 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:01.605 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:01.605 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:01.606 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:01.606 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:01.606 09:24:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:01.606 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:01.606 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:01.606 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.606 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.606 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:01.606 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.606 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.606 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:01.606 "name": "raid_bdev1", 00:25:01.606 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:25:01.606 "strip_size_kb": 0, 00:25:01.606 "state": "online", 00:25:01.606 "raid_level": "raid1", 00:25:01.606 "superblock": true, 00:25:01.606 "num_base_bdevs": 2, 00:25:01.606 "num_base_bdevs_discovered": 1, 00:25:01.606 "num_base_bdevs_operational": 1, 00:25:01.606 "base_bdevs_list": [ 00:25:01.606 { 00:25:01.606 "name": null, 00:25:01.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.606 "is_configured": false, 00:25:01.606 "data_offset": 0, 00:25:01.606 "data_size": 7936 00:25:01.606 }, 00:25:01.606 { 00:25:01.606 "name": "BaseBdev2", 00:25:01.606 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:25:01.606 "is_configured": true, 00:25:01.606 "data_offset": 256, 00:25:01.606 "data_size": 7936 00:25:01.606 } 00:25:01.606 ] 00:25:01.606 }' 00:25:01.606 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:01.606 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:01.864 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:01.864 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:01.864 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:01.864 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:01.864 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:01.864 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.864 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.864 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:01.864 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.864 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.864 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:01.864 "name": "raid_bdev1", 00:25:01.864 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:25:01.864 "strip_size_kb": 0, 00:25:01.864 "state": "online", 00:25:01.864 "raid_level": "raid1", 00:25:01.864 "superblock": true, 00:25:01.864 "num_base_bdevs": 2, 00:25:01.864 "num_base_bdevs_discovered": 1, 00:25:01.864 "num_base_bdevs_operational": 1, 00:25:01.864 "base_bdevs_list": [ 00:25:01.864 { 00:25:01.864 "name": null, 00:25:01.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.864 
"is_configured": false, 00:25:01.864 "data_offset": 0, 00:25:01.864 "data_size": 7936 00:25:01.864 }, 00:25:01.864 { 00:25:01.864 "name": "BaseBdev2", 00:25:01.864 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:25:01.864 "is_configured": true, 00:25:01.864 "data_offset": 256, 00:25:01.864 "data_size": 7936 00:25:01.864 } 00:25:01.864 ] 00:25:01.864 }' 00:25:01.864 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:01.864 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:01.864 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:02.140 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:02.140 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:02.140 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:25:02.140 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:02.140 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:02.140 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:02.140 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:02.140 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:02.140 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:02.140 09:24:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.140 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.140 [2024-10-15 09:24:45.837914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:02.140 [2024-10-15 09:24:45.838199] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:02.140 [2024-10-15 09:24:45.838226] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:02.140 request: 00:25:02.140 { 00:25:02.140 "base_bdev": "BaseBdev1", 00:25:02.140 "raid_bdev": "raid_bdev1", 00:25:02.140 "method": "bdev_raid_add_base_bdev", 00:25:02.140 "req_id": 1 00:25:02.140 } 00:25:02.140 Got JSON-RPC error response 00:25:02.140 response: 00:25:02.140 { 00:25:02.140 "code": -22, 00:25:02.140 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:02.140 } 00:25:02.140 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:02.140 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:25:02.140 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:02.140 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:02.140 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:02.140 09:24:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:25:03.074 09:24:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:03.074 09:24:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:25:03.074 09:24:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:03.074 09:24:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:03.074 09:24:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:03.074 09:24:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:03.074 09:24:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:03.074 09:24:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:03.074 09:24:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:03.074 09:24:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:03.074 09:24:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.074 09:24:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.074 09:24:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.074 09:24:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:03.074 09:24:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.074 09:24:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:03.074 "name": "raid_bdev1", 00:25:03.074 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:25:03.074 "strip_size_kb": 0, 00:25:03.074 "state": "online", 00:25:03.074 "raid_level": "raid1", 00:25:03.074 "superblock": true, 00:25:03.074 "num_base_bdevs": 2, 00:25:03.074 
"num_base_bdevs_discovered": 1, 00:25:03.074 "num_base_bdevs_operational": 1, 00:25:03.074 "base_bdevs_list": [ 00:25:03.074 { 00:25:03.074 "name": null, 00:25:03.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.074 "is_configured": false, 00:25:03.074 "data_offset": 0, 00:25:03.074 "data_size": 7936 00:25:03.074 }, 00:25:03.074 { 00:25:03.074 "name": "BaseBdev2", 00:25:03.074 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:25:03.074 "is_configured": true, 00:25:03.074 "data_offset": 256, 00:25:03.074 "data_size": 7936 00:25:03.074 } 00:25:03.074 ] 00:25:03.074 }' 00:25:03.074 09:24:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:03.074 09:24:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:03.640 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:03.640 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:03.640 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:03.640 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:03.640 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:03.640 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.640 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.640 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:03.640 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.640 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.640 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:03.640 "name": "raid_bdev1", 00:25:03.640 "uuid": "e3136e46-b4e7-432b-9d69-ae52097d6bdf", 00:25:03.640 "strip_size_kb": 0, 00:25:03.640 "state": "online", 00:25:03.640 "raid_level": "raid1", 00:25:03.640 "superblock": true, 00:25:03.640 "num_base_bdevs": 2, 00:25:03.640 "num_base_bdevs_discovered": 1, 00:25:03.640 "num_base_bdevs_operational": 1, 00:25:03.640 "base_bdevs_list": [ 00:25:03.640 { 00:25:03.640 "name": null, 00:25:03.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.640 "is_configured": false, 00:25:03.640 "data_offset": 0, 00:25:03.640 "data_size": 7936 00:25:03.640 }, 00:25:03.640 { 00:25:03.640 "name": "BaseBdev2", 00:25:03.640 "uuid": "e35e21b5-fb25-5967-b16a-615f2fe11182", 00:25:03.640 "is_configured": true, 00:25:03.640 "data_offset": 256, 00:25:03.641 "data_size": 7936 00:25:03.641 } 00:25:03.641 ] 00:25:03.641 }' 00:25:03.641 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:03.641 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:03.641 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:03.641 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:03.641 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88534 00:25:03.641 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 88534 ']' 00:25:03.641 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 88534 00:25:03.641 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:25:03.641 09:24:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:03.641 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88534 00:25:03.641 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:03.641 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:03.641 killing process with pid 88534 00:25:03.641 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88534' 00:25:03.641 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 88534 00:25:03.641 Received shutdown signal, test time was about 60.000000 seconds 00:25:03.641 00:25:03.641 Latency(us) 00:25:03.641 [2024-10-15T09:24:47.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.641 [2024-10-15T09:24:47.569Z] =================================================================================================================== 00:25:03.641 [2024-10-15T09:24:47.569Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:03.641 [2024-10-15 09:24:47.553619] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:03.641 09:24:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 88534 00:25:03.641 [2024-10-15 09:24:47.553800] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:03.641 [2024-10-15 09:24:47.553882] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:03.641 [2024-10-15 09:24:47.553902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:25:04.220 [2024-10-15 09:24:47.868234] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit
00:25:05.157 09:24:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0
00:25:05.157
00:25:05.157 real	0m22.016s
00:25:05.157 user	0m29.967s
00:25:05.157 sys	0m2.615s
00:25:05.157 09:24:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:05.157 09:24:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:05.157 ************************************
00:25:05.157 END TEST raid_rebuild_test_sb_md_separate
00:25:05.157 ************************************
00:25:05.157 09:24:49 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i'
00:25:05.157 09:24:49 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true
00:25:05.157 09:24:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:25:05.157 09:24:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:05.157 09:24:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:25:05.157 ************************************
00:25:05.157 START TEST raid_state_function_test_sb_md_interleaved
00:25:05.157 ************************************
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89241
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89241'
00:25:05.157 Process raid pid: 89241
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89241
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89241 ']'
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:05.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:05.157 09:24:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:05.425 [2024-10-15 09:24:49.165281] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization...
00:25:05.425 [2024-10-15 09:24:49.166162] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:05.719 [2024-10-15 09:24:49.347476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:05.719 [2024-10-15 09:24:49.495770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:05.977 [2024-10-15 09:24:49.726626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:25:05.977 [2024-10-15 09:24:49.726706] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:25:06.235 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:06.235 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0
00:25:06.235 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:25:06.235 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:06.235 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:06.235 [2024-10-15 09:24:50.142830] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:25:06.235 [2024-10-15 09:24:50.142915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:25:06.235 [2024-10-15 09:24:50.142945] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:25:06.235 [2024-10-15 09:24:50.142976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:25:06.235 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:06.235 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:25:06.235 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:06.235 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:06.235 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:06.235 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:06.235 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:25:06.235 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:06.235 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:06.235 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:06.235 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:06.235 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:06.235 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:06.235 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:06.235 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:06.493 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:06.493 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:06.493 "name": "Existed_Raid",
00:25:06.493 "uuid": "8ab0540b-7caa-421a-94e6-f6da544e37eb",
00:25:06.493 "strip_size_kb": 0,
00:25:06.493 "state": "configuring",
00:25:06.493 "raid_level": "raid1",
00:25:06.493 "superblock": true,
00:25:06.493 "num_base_bdevs": 2,
00:25:06.493 "num_base_bdevs_discovered": 0,
00:25:06.493 "num_base_bdevs_operational": 2,
00:25:06.493 "base_bdevs_list": [
00:25:06.493 {
00:25:06.493 "name": "BaseBdev1",
00:25:06.493 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:06.493 "is_configured": false,
00:25:06.493 "data_offset": 0,
00:25:06.493 "data_size": 0
00:25:06.493 },
00:25:06.493 {
00:25:06.493 "name": "BaseBdev2",
00:25:06.493 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:06.493 "is_configured": false,
00:25:06.494 "data_offset": 0,
00:25:06.494 "data_size": 0
00:25:06.494 }
00:25:06.494 ]
00:25:06.494 }'
00:25:06.494 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:06.494 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:06.752 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:25:06.752 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:06.752 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:06.752 [2024-10-15 09:24:50.678877] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:25:06.752 [2024-10-15 09:24:50.678946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:25:07.010 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:07.010 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:25:07.010 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:07.010 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:07.010 [2024-10-15 09:24:50.690933] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:25:07.010 [2024-10-15 09:24:50.691005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:25:07.011 [2024-10-15 09:24:50.691031] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:25:07.011 [2024-10-15 09:24:50.691066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:07.011 [2024-10-15 09:24:50.739396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:25:07.011 BaseBdev1
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:07.011 [
00:25:07.011 {
00:25:07.011 "name": "BaseBdev1",
00:25:07.011 "aliases": [
00:25:07.011 "55a770d4-c3c5-4957-96c1-1e7bcfdae401"
00:25:07.011 ],
00:25:07.011 "product_name": "Malloc disk",
00:25:07.011 "block_size": 4128,
00:25:07.011 "num_blocks": 8192,
00:25:07.011 "uuid": "55a770d4-c3c5-4957-96c1-1e7bcfdae401",
00:25:07.011 "md_size": 32,
00:25:07.011 "md_interleave": true,
00:25:07.011 "dif_type": 0,
00:25:07.011 "assigned_rate_limits": {
00:25:07.011 "rw_ios_per_sec": 0,
00:25:07.011 "rw_mbytes_per_sec": 0,
00:25:07.011 "r_mbytes_per_sec": 0,
00:25:07.011 "w_mbytes_per_sec": 0
00:25:07.011 },
00:25:07.011 "claimed": true,
00:25:07.011 "claim_type": "exclusive_write",
00:25:07.011 "zoned": false,
00:25:07.011 "supported_io_types": {
00:25:07.011 "read": true,
00:25:07.011 "write": true,
00:25:07.011 "unmap": true,
00:25:07.011 "flush": true,
00:25:07.011 "reset": true,
00:25:07.011 "nvme_admin": false,
00:25:07.011 "nvme_io": false,
00:25:07.011 "nvme_io_md": false,
00:25:07.011 "write_zeroes": true,
00:25:07.011 "zcopy": true,
00:25:07.011 "get_zone_info": false,
00:25:07.011 "zone_management": false,
00:25:07.011 "zone_append": false,
00:25:07.011 "compare": false,
00:25:07.011 "compare_and_write": false,
00:25:07.011 "abort": true,
00:25:07.011 "seek_hole": false,
00:25:07.011 "seek_data": false,
00:25:07.011 "copy": true,
00:25:07.011 "nvme_iov_md": false
00:25:07.011 },
00:25:07.011 "memory_domains": [
00:25:07.011 {
00:25:07.011 "dma_device_id": "system",
00:25:07.011 "dma_device_type": 1
00:25:07.011 },
00:25:07.011 {
00:25:07.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:07.011 "dma_device_type": 2
00:25:07.011 }
00:25:07.011 ],
00:25:07.011 "driver_specific": {}
00:25:07.011 }
00:25:07.011 ]
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:07.011 "name": "Existed_Raid",
00:25:07.011 "uuid": "f3235896-5295-4f97-ab99-5a7caf487ebd",
00:25:07.011 "strip_size_kb": 0,
00:25:07.011 "state": "configuring",
00:25:07.011 "raid_level": "raid1",
00:25:07.011 "superblock": true,
00:25:07.011 "num_base_bdevs": 2,
00:25:07.011 "num_base_bdevs_discovered": 1,
00:25:07.011 "num_base_bdevs_operational": 2,
00:25:07.011 "base_bdevs_list": [
00:25:07.011 {
00:25:07.011 "name": "BaseBdev1",
00:25:07.011 "uuid": "55a770d4-c3c5-4957-96c1-1e7bcfdae401",
00:25:07.011 "is_configured": true,
00:25:07.011 "data_offset": 256,
00:25:07.011 "data_size": 7936
00:25:07.011 },
00:25:07.011 {
00:25:07.011 "name": "BaseBdev2",
00:25:07.011 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:07.011 "is_configured": false,
00:25:07.011 "data_offset": 0,
00:25:07.011 "data_size": 0
00:25:07.011 }
00:25:07.011 ]
00:25:07.011 }'
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:07.011 09:24:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:07.578 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:25:07.578 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:07.578 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:07.578 [2024-10-15 09:24:51.283648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:25:07.578 [2024-10-15 09:24:51.283725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:25:07.578 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:07.578 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:25:07.578 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:07.578 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:07.578 [2024-10-15 09:24:51.291690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:25:07.578 [2024-10-15 09:24:51.294320] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:25:07.578 [2024-10-15 09:24:51.294377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:25:07.579 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:07.579 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:25:07.579 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:25:07.579 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:25:07.579 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:07.579 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:07.579 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:07.579 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:07.579 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:25:07.579 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:07.579 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:07.579 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:07.579 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:07.579 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:07.579 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:07.579 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:07.579 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:07.579 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:07.579 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:07.579 "name": "Existed_Raid",
00:25:07.579 "uuid": "22cf8757-d17c-413e-81cb-bd09453ee2df",
00:25:07.579 "strip_size_kb": 0,
00:25:07.579 "state": "configuring",
00:25:07.579 "raid_level": "raid1",
00:25:07.579 "superblock": true,
00:25:07.579 "num_base_bdevs": 2,
00:25:07.579 "num_base_bdevs_discovered": 1,
00:25:07.579 "num_base_bdevs_operational": 2,
00:25:07.579 "base_bdevs_list": [
00:25:07.579 {
00:25:07.579 "name": "BaseBdev1",
00:25:07.579 "uuid": "55a770d4-c3c5-4957-96c1-1e7bcfdae401",
00:25:07.579 "is_configured": true,
00:25:07.579 "data_offset": 256,
00:25:07.579 "data_size": 7936
00:25:07.579 },
00:25:07.579 {
00:25:07.579 "name": "BaseBdev2",
00:25:07.579 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:07.579 "is_configured": false,
00:25:07.579 "data_offset": 0,
00:25:07.579 "data_size": 0
00:25:07.579 }
00:25:07.579 ]
00:25:07.579 }'
00:25:07.579 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:07.579 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:08.147 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2
00:25:08.147 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:08.147 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:08.147 [2024-10-15 09:24:51.838735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:25:08.147 [2024-10-15 09:24:51.839039] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:25:08.147 [2024-10-15 09:24:51.839070] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:25:08.147 [2024-10-15 09:24:51.839204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:25:08.147 [2024-10-15 09:24:51.839321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:25:08.147 [2024-10-15 09:24:51.839341] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:25:08.147 [2024-10-15 09:24:51.839430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:08.147 BaseBdev2
00:25:08.147 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:08.147 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:25:08.147 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:25:08.147 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:25:08.147 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i
00:25:08.147 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:25:08.147 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:25:08.147 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:25:08.147 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:08.147 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:08.147 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:08.147 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:25:08.147 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:08.147 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:08.147 [
00:25:08.147 {
00:25:08.147 "name": "BaseBdev2",
00:25:08.147 "aliases": [
00:25:08.147 "7ef7148a-d7ff-412f-b807-6a8c6b584c3f"
00:25:08.147 ],
00:25:08.147 "product_name": "Malloc disk",
00:25:08.147 "block_size": 4128,
00:25:08.147 "num_blocks": 8192,
00:25:08.147 "uuid": "7ef7148a-d7ff-412f-b807-6a8c6b584c3f",
00:25:08.147 "md_size": 32,
00:25:08.147 "md_interleave": true,
00:25:08.147 "dif_type": 0,
00:25:08.147 "assigned_rate_limits": {
00:25:08.147 "rw_ios_per_sec": 0,
00:25:08.147 "rw_mbytes_per_sec": 0,
00:25:08.147 "r_mbytes_per_sec": 0,
00:25:08.147 "w_mbytes_per_sec": 0
00:25:08.147 },
00:25:08.147 "claimed": true,
00:25:08.147 "claim_type": "exclusive_write",
00:25:08.147 "zoned": false,
00:25:08.147 "supported_io_types": {
00:25:08.147 "read": true,
00:25:08.147 "write": true,
00:25:08.147 "unmap": true,
00:25:08.147 "flush": true,
00:25:08.147 "reset": true,
00:25:08.147 "nvme_admin": false,
00:25:08.147 "nvme_io": false,
00:25:08.147 "nvme_io_md": false,
00:25:08.147 "write_zeroes": true,
00:25:08.147 "zcopy": true,
00:25:08.147 "get_zone_info": false,
00:25:08.147 "zone_management": false,
00:25:08.147 "zone_append": false,
00:25:08.147 "compare": false,
00:25:08.147 "compare_and_write": false,
00:25:08.147 "abort": true,
00:25:08.147 "seek_hole": false,
00:25:08.147 "seek_data": false,
00:25:08.147 "copy": true,
00:25:08.147 "nvme_iov_md": false
00:25:08.147 },
00:25:08.147 "memory_domains": [
00:25:08.147 {
00:25:08.147 "dma_device_id": "system",
00:25:08.147 "dma_device_type": 1
00:25:08.147 },
00:25:08.147 {
00:25:08.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:08.148 "dma_device_type": 2
00:25:08.148 }
00:25:08.148 ],
00:25:08.148 "driver_specific": {}
00:25:08.148 }
00:25:08.148 ]
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:08.148 "name": "Existed_Raid",
00:25:08.148 "uuid": "22cf8757-d17c-413e-81cb-bd09453ee2df",
00:25:08.148 "strip_size_kb": 0,
00:25:08.148 "state": "online",
00:25:08.148 "raid_level": "raid1",
00:25:08.148 "superblock": true,
00:25:08.148 "num_base_bdevs": 2,
00:25:08.148 "num_base_bdevs_discovered": 2,
00:25:08.148 "num_base_bdevs_operational": 2,
00:25:08.148 "base_bdevs_list": [
00:25:08.148 {
00:25:08.148 "name": "BaseBdev1",
00:25:08.148 "uuid": "55a770d4-c3c5-4957-96c1-1e7bcfdae401",
00:25:08.148 "is_configured": true,
00:25:08.148 "data_offset": 256,
00:25:08.148 "data_size": 7936
00:25:08.148 },
00:25:08.148 {
00:25:08.148 "name": "BaseBdev2",
00:25:08.148 "uuid": "7ef7148a-d7ff-412f-b807-6a8c6b584c3f",
00:25:08.148 "is_configured": true,
00:25:08.148 "data_offset": 256,
00:25:08.148 "data_size": 7936
00:25:08.148 }
00:25:08.148 ]
00:25:08.148 }'
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:08.148 09:24:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:08.716 [2024-10-15 09:24:52.383404] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:25:08.716 "name": "Existed_Raid",
00:25:08.716 "aliases": [
00:25:08.716 "22cf8757-d17c-413e-81cb-bd09453ee2df"
00:25:08.716 ],
00:25:08.716 "product_name": "Raid Volume",
00:25:08.716 "block_size": 4128,
00:25:08.716 "num_blocks": 7936,
00:25:08.716 "uuid": "22cf8757-d17c-413e-81cb-bd09453ee2df",
00:25:08.716 "md_size": 32,
00:25:08.716 "md_interleave": true,
00:25:08.716 "dif_type": 0,
00:25:08.716 "assigned_rate_limits": {
00:25:08.716 "rw_ios_per_sec": 0,
00:25:08.716 "rw_mbytes_per_sec": 0,
00:25:08.716 "r_mbytes_per_sec": 0,
00:25:08.716 "w_mbytes_per_sec": 0
00:25:08.716 },
00:25:08.716 "claimed": false,
00:25:08.716 "zoned": false,
00:25:08.716 "supported_io_types": {
00:25:08.716 "read": true,
00:25:08.716 "write": true,
00:25:08.716 "unmap": false,
00:25:08.716 "flush": false,
00:25:08.716 "reset": true,
00:25:08.716 "nvme_admin": false,
00:25:08.716 "nvme_io": false,
00:25:08.716 "nvme_io_md": false,
00:25:08.716 "write_zeroes": true,
00:25:08.716 "zcopy": false,
00:25:08.716 "get_zone_info": false,
00:25:08.716 "zone_management": false,
00:25:08.716 "zone_append": false,
00:25:08.716 "compare": false,
00:25:08.716 "compare_and_write": false,
00:25:08.716 "abort": false,
00:25:08.716 "seek_hole": false,
00:25:08.716 "seek_data": false,
00:25:08.716 "copy": false,
00:25:08.716 "nvme_iov_md": false
00:25:08.716 },
00:25:08.716 "memory_domains": [
00:25:08.716 {
00:25:08.716 "dma_device_id": "system",
00:25:08.716 "dma_device_type": 1
00:25:08.716 },
00:25:08.716 {
00:25:08.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:08.716 "dma_device_type": 2
00:25:08.716 },
00:25:08.716 {
00:25:08.716 "dma_device_id": "system",
00:25:08.716 "dma_device_type": 1
00:25:08.716 },
00:25:08.716 {
00:25:08.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:08.716 "dma_device_type": 2
00:25:08.716 }
00:25:08.716 ],
00:25:08.716 "driver_specific": {
00:25:08.716 "raid": {
00:25:08.716 "uuid": "22cf8757-d17c-413e-81cb-bd09453ee2df",
00:25:08.716 "strip_size_kb": 0,
00:25:08.716 "state": "online",
00:25:08.716 "raid_level": "raid1",
00:25:08.716 "superblock": true,
00:25:08.716 "num_base_bdevs": 2,
00:25:08.716 "num_base_bdevs_discovered": 2,
00:25:08.716 "num_base_bdevs_operational": 2,
00:25:08.716 "base_bdevs_list": [
00:25:08.716 {
00:25:08.716 "name": "BaseBdev1",
00:25:08.716 "uuid": "55a770d4-c3c5-4957-96c1-1e7bcfdae401",
00:25:08.716 "is_configured": true,
00:25:08.716 "data_offset": 256,
00:25:08.716 "data_size": 7936
00:25:08.716 },
00:25:08.716 {
00:25:08.716 "name": "BaseBdev2",
00:25:08.716 "uuid": "7ef7148a-d7ff-412f-b807-6a8c6b584c3f",
00:25:08.716 "is_configured": true,
00:25:08.716 "data_offset": 256,
00:25:08.716 "data_size": 7936
00:25:08.716 }
00:25:08.716 ]
00:25:08.716 }
00:25:08.716 }
00:25:08.716 }'
00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:25:08.716 BaseBdev2'
00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 --
# for name in $base_bdev_names 00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:08.716 
09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.716 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:08.716 [2024-10-15 09:24:52.615093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:08.977 09:24:52 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:08.977 "name": "Existed_Raid", 00:25:08.977 "uuid": "22cf8757-d17c-413e-81cb-bd09453ee2df", 00:25:08.977 "strip_size_kb": 0, 00:25:08.977 "state": "online", 00:25:08.977 "raid_level": "raid1", 00:25:08.977 "superblock": true, 00:25:08.977 "num_base_bdevs": 2, 00:25:08.977 "num_base_bdevs_discovered": 1, 00:25:08.977 "num_base_bdevs_operational": 1, 00:25:08.977 "base_bdevs_list": [ 00:25:08.977 { 00:25:08.977 "name": null, 00:25:08.977 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:08.977 "is_configured": false, 00:25:08.977 "data_offset": 0, 00:25:08.977 "data_size": 7936 00:25:08.977 }, 00:25:08.977 { 00:25:08.977 "name": "BaseBdev2", 00:25:08.977 "uuid": "7ef7148a-d7ff-412f-b807-6a8c6b584c3f", 00:25:08.977 "is_configured": true, 00:25:08.977 "data_offset": 256, 00:25:08.977 "data_size": 7936 00:25:08.977 } 00:25:08.977 ] 00:25:08.977 }' 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:08.977 09:24:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:09.544 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:09.544 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:09.544 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.544 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.544 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:09.545 09:24:53 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:09.545 [2024-10-15 09:24:53.303256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:09.545 [2024-10-15 09:24:53.303416] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:09.545 [2024-10-15 09:24:53.398701] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:09.545 [2024-10-15 09:24:53.398784] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:09.545 [2024-10-15 09:24:53.398805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89241 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89241 ']' 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89241 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:09.545 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89241 00:25:09.803 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:09.803 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:09.803 killing process with pid 89241 00:25:09.803 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89241' 00:25:09.803 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 89241 00:25:09.803 [2024-10-15 09:24:53.495196] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:09.803 09:24:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 89241 00:25:09.803 [2024-10-15 09:24:53.510857] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:10.738 
09:24:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:25:10.738 00:25:10.738 real 0m5.614s 00:25:10.738 user 0m8.344s 00:25:10.738 sys 0m0.869s 00:25:10.738 09:24:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:10.738 09:24:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:10.738 ************************************ 00:25:10.738 END TEST raid_state_function_test_sb_md_interleaved 00:25:10.738 ************************************ 00:25:10.996 09:24:54 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:25:10.996 09:24:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:25:10.996 09:24:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:10.996 09:24:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:10.996 ************************************ 00:25:10.996 START TEST raid_superblock_test_md_interleaved 00:25:10.996 ************************************ 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89493 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89493 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89493 ']' 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:10.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:10.997 09:24:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:10.997 [2024-10-15 09:24:54.824527] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:25:10.997 [2024-10-15 09:24:54.824728] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89493 ] 00:25:11.255 [2024-10-15 09:24:55.004288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.255 [2024-10-15 09:24:55.150474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.513 [2024-10-15 09:24:55.374151] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:11.513 [2024-10-15 09:24:55.374218] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.080 malloc1 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.080 [2024-10-15 09:24:55.858067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:12.080 [2024-10-15 09:24:55.858346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:12.080 [2024-10-15 09:24:55.858545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:12.080 [2024-10-15 09:24:55.858673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:12.080 
[2024-10-15 09:24:55.861458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:12.080 [2024-10-15 09:24:55.861618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:12.080 pt1 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.080 malloc2 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.080 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.080 [2024-10-15 09:24:55.917359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:12.080 [2024-10-15 09:24:55.917597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:12.080 [2024-10-15 09:24:55.917642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:12.081 [2024-10-15 09:24:55.917659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:12.081 [2024-10-15 09:24:55.920274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:12.081 [2024-10-15 09:24:55.920316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:12.081 pt2 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.081 [2024-10-15 09:24:55.929427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:12.081 [2024-10-15 09:24:55.932069] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:12.081 [2024-10-15 09:24:55.932398] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:12.081 [2024-10-15 09:24:55.932419] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:12.081 [2024-10-15 09:24:55.932531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:12.081 [2024-10-15 09:24:55.932651] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:12.081 [2024-10-15 09:24:55.932671] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:12.081 [2024-10-15 09:24:55.932776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:12.081 
09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:12.081 "name": "raid_bdev1", 00:25:12.081 "uuid": "d85e6fcc-d67e-4601-97de-97235a2252e3", 00:25:12.081 "strip_size_kb": 0, 00:25:12.081 "state": "online", 00:25:12.081 "raid_level": "raid1", 00:25:12.081 "superblock": true, 00:25:12.081 "num_base_bdevs": 2, 00:25:12.081 "num_base_bdevs_discovered": 2, 00:25:12.081 "num_base_bdevs_operational": 2, 00:25:12.081 "base_bdevs_list": [ 00:25:12.081 { 00:25:12.081 "name": "pt1", 00:25:12.081 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:12.081 "is_configured": true, 00:25:12.081 "data_offset": 256, 00:25:12.081 "data_size": 7936 00:25:12.081 }, 00:25:12.081 { 00:25:12.081 "name": "pt2", 00:25:12.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:12.081 "is_configured": true, 00:25:12.081 "data_offset": 256, 00:25:12.081 "data_size": 7936 00:25:12.081 } 00:25:12.081 ] 00:25:12.081 }' 00:25:12.081 09:24:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:12.081 09:24:55 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.648 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:12.648 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:12.648 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:12.648 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:12.648 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:25:12.648 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:12.648 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:12.648 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:12.648 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.648 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.648 [2024-10-15 09:24:56.433912] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:12.648 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.648 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:12.648 "name": "raid_bdev1", 00:25:12.648 "aliases": [ 00:25:12.648 "d85e6fcc-d67e-4601-97de-97235a2252e3" 00:25:12.648 ], 00:25:12.648 "product_name": "Raid Volume", 00:25:12.648 "block_size": 4128, 00:25:12.648 "num_blocks": 7936, 00:25:12.648 "uuid": "d85e6fcc-d67e-4601-97de-97235a2252e3", 00:25:12.648 "md_size": 32, 
00:25:12.648 "md_interleave": true, 00:25:12.648 "dif_type": 0, 00:25:12.648 "assigned_rate_limits": { 00:25:12.648 "rw_ios_per_sec": 0, 00:25:12.648 "rw_mbytes_per_sec": 0, 00:25:12.648 "r_mbytes_per_sec": 0, 00:25:12.648 "w_mbytes_per_sec": 0 00:25:12.648 }, 00:25:12.648 "claimed": false, 00:25:12.648 "zoned": false, 00:25:12.648 "supported_io_types": { 00:25:12.648 "read": true, 00:25:12.648 "write": true, 00:25:12.648 "unmap": false, 00:25:12.648 "flush": false, 00:25:12.648 "reset": true, 00:25:12.648 "nvme_admin": false, 00:25:12.648 "nvme_io": false, 00:25:12.648 "nvme_io_md": false, 00:25:12.648 "write_zeroes": true, 00:25:12.648 "zcopy": false, 00:25:12.648 "get_zone_info": false, 00:25:12.648 "zone_management": false, 00:25:12.648 "zone_append": false, 00:25:12.648 "compare": false, 00:25:12.648 "compare_and_write": false, 00:25:12.648 "abort": false, 00:25:12.648 "seek_hole": false, 00:25:12.648 "seek_data": false, 00:25:12.648 "copy": false, 00:25:12.648 "nvme_iov_md": false 00:25:12.648 }, 00:25:12.648 "memory_domains": [ 00:25:12.648 { 00:25:12.648 "dma_device_id": "system", 00:25:12.648 "dma_device_type": 1 00:25:12.648 }, 00:25:12.648 { 00:25:12.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.648 "dma_device_type": 2 00:25:12.648 }, 00:25:12.648 { 00:25:12.648 "dma_device_id": "system", 00:25:12.648 "dma_device_type": 1 00:25:12.648 }, 00:25:12.648 { 00:25:12.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.648 "dma_device_type": 2 00:25:12.648 } 00:25:12.648 ], 00:25:12.648 "driver_specific": { 00:25:12.648 "raid": { 00:25:12.648 "uuid": "d85e6fcc-d67e-4601-97de-97235a2252e3", 00:25:12.648 "strip_size_kb": 0, 00:25:12.648 "state": "online", 00:25:12.648 "raid_level": "raid1", 00:25:12.648 "superblock": true, 00:25:12.648 "num_base_bdevs": 2, 00:25:12.648 "num_base_bdevs_discovered": 2, 00:25:12.648 "num_base_bdevs_operational": 2, 00:25:12.648 "base_bdevs_list": [ 00:25:12.648 { 00:25:12.648 "name": "pt1", 00:25:12.648 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:25:12.648 "is_configured": true, 00:25:12.648 "data_offset": 256, 00:25:12.648 "data_size": 7936 00:25:12.648 }, 00:25:12.648 { 00:25:12.648 "name": "pt2", 00:25:12.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:12.648 "is_configured": true, 00:25:12.648 "data_offset": 256, 00:25:12.648 "data_size": 7936 00:25:12.648 } 00:25:12.648 ] 00:25:12.648 } 00:25:12.648 } 00:25:12.648 }' 00:25:12.648 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:12.648 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:12.648 pt2' 00:25:12.648 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:12.907 09:24:56 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:12.907 [2024-10-15 09:24:56.698116] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d85e6fcc-d67e-4601-97de-97235a2252e3 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z d85e6fcc-d67e-4601-97de-97235a2252e3 ']' 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.907 [2024-10-15 09:24:56.749574] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:12.907 [2024-10-15 09:24:56.749610] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:12.907 [2024-10-15 09:24:56.749732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:12.907 [2024-10-15 09:24:56.749816] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:12.907 [2024-10-15 09:24:56.749837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.907 09:24:56 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.907 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.166 09:24:56 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.166 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:25:13.166 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:13.166 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:25:13.166 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:13.166 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:13.166 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:13.166 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:13.166 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:13.166 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:13.166 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.166 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.166 [2024-10-15 09:24:56.893682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:13.166 [2024-10-15 09:24:56.896628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:13.166 [2024-10-15 09:24:56.896742] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:25:13.166 [2024-10-15 09:24:56.896830] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:13.166 [2024-10-15 09:24:56.896858] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:13.166 [2024-10-15 09:24:56.896874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:25:13.166 request: 00:25:13.166 { 00:25:13.166 "name": "raid_bdev1", 00:25:13.166 "raid_level": "raid1", 00:25:13.166 "base_bdevs": [ 00:25:13.166 "malloc1", 00:25:13.166 "malloc2" 00:25:13.166 ], 00:25:13.166 "superblock": false, 00:25:13.166 "method": "bdev_raid_create", 00:25:13.166 "req_id": 1 00:25:13.166 } 00:25:13.166 Got JSON-RPC error response 00:25:13.166 response: 00:25:13.166 { 00:25:13.166 "code": -17, 00:25:13.166 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:13.166 } 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:13.167 09:24:56 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.167 [2024-10-15 09:24:56.965741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:13.167 [2024-10-15 09:24:56.965981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:13.167 [2024-10-15 09:24:56.966134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:13.167 [2024-10-15 09:24:56.966302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:13.167 [2024-10-15 09:24:56.969092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:13.167 [2024-10-15 09:24:56.969295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:13.167 [2024-10-15 09:24:56.969463] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:13.167 [2024-10-15 09:24:56.969655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:13.167 pt1 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.167 09:24:56 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.167 09:24:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.167 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:13.167 
"name": "raid_bdev1", 00:25:13.167 "uuid": "d85e6fcc-d67e-4601-97de-97235a2252e3", 00:25:13.167 "strip_size_kb": 0, 00:25:13.167 "state": "configuring", 00:25:13.167 "raid_level": "raid1", 00:25:13.167 "superblock": true, 00:25:13.167 "num_base_bdevs": 2, 00:25:13.167 "num_base_bdevs_discovered": 1, 00:25:13.167 "num_base_bdevs_operational": 2, 00:25:13.167 "base_bdevs_list": [ 00:25:13.167 { 00:25:13.167 "name": "pt1", 00:25:13.167 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:13.167 "is_configured": true, 00:25:13.167 "data_offset": 256, 00:25:13.167 "data_size": 7936 00:25:13.167 }, 00:25:13.167 { 00:25:13.167 "name": null, 00:25:13.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:13.167 "is_configured": false, 00:25:13.167 "data_offset": 256, 00:25:13.167 "data_size": 7936 00:25:13.167 } 00:25:13.167 ] 00:25:13.167 }' 00:25:13.167 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:13.167 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.733 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:25:13.733 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:13.733 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:13.733 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:13.733 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.733 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.733 [2024-10-15 09:24:57.506115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:13.733 [2024-10-15 09:24:57.506242] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:13.733 [2024-10-15 09:24:57.506280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:13.733 [2024-10-15 09:24:57.506300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:13.733 [2024-10-15 09:24:57.506557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:13.733 [2024-10-15 09:24:57.506590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:13.734 [2024-10-15 09:24:57.506666] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:13.734 [2024-10-15 09:24:57.506712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:13.734 [2024-10-15 09:24:57.506858] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:13.734 [2024-10-15 09:24:57.506880] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:13.734 [2024-10-15 09:24:57.506980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:13.734 [2024-10-15 09:24:57.507075] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:13.734 [2024-10-15 09:24:57.507096] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:13.734 [2024-10-15 09:24:57.507204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:13.734 pt2 00:25:13.734 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.734 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:13.734 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:13.734 09:24:57 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:13.734 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:13.734 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:13.734 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:13.734 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:13.734 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:13.734 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:13.734 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:13.734 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:13.734 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:13.734 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.734 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.734 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.734 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.734 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.734 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:13.734 "name": 
"raid_bdev1", 00:25:13.734 "uuid": "d85e6fcc-d67e-4601-97de-97235a2252e3", 00:25:13.734 "strip_size_kb": 0, 00:25:13.734 "state": "online", 00:25:13.734 "raid_level": "raid1", 00:25:13.734 "superblock": true, 00:25:13.734 "num_base_bdevs": 2, 00:25:13.734 "num_base_bdevs_discovered": 2, 00:25:13.734 "num_base_bdevs_operational": 2, 00:25:13.734 "base_bdevs_list": [ 00:25:13.734 { 00:25:13.734 "name": "pt1", 00:25:13.734 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:13.734 "is_configured": true, 00:25:13.734 "data_offset": 256, 00:25:13.734 "data_size": 7936 00:25:13.734 }, 00:25:13.734 { 00:25:13.734 "name": "pt2", 00:25:13.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:13.734 "is_configured": true, 00:25:13.734 "data_offset": 256, 00:25:13.734 "data_size": 7936 00:25:13.734 } 00:25:13.734 ] 00:25:13.734 }' 00:25:13.734 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:13.734 09:24:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:14.302 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:14.302 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:14.302 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:14.302 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:14.302 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:25:14.302 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:14.302 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:14.302 09:24:58 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:14.302 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.302 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:14.302 [2024-10-15 09:24:58.058656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:14.302 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.302 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:14.302 "name": "raid_bdev1", 00:25:14.302 "aliases": [ 00:25:14.302 "d85e6fcc-d67e-4601-97de-97235a2252e3" 00:25:14.302 ], 00:25:14.302 "product_name": "Raid Volume", 00:25:14.302 "block_size": 4128, 00:25:14.302 "num_blocks": 7936, 00:25:14.302 "uuid": "d85e6fcc-d67e-4601-97de-97235a2252e3", 00:25:14.302 "md_size": 32, 00:25:14.302 "md_interleave": true, 00:25:14.302 "dif_type": 0, 00:25:14.302 "assigned_rate_limits": { 00:25:14.302 "rw_ios_per_sec": 0, 00:25:14.302 "rw_mbytes_per_sec": 0, 00:25:14.302 "r_mbytes_per_sec": 0, 00:25:14.302 "w_mbytes_per_sec": 0 00:25:14.302 }, 00:25:14.302 "claimed": false, 00:25:14.302 "zoned": false, 00:25:14.302 "supported_io_types": { 00:25:14.302 "read": true, 00:25:14.302 "write": true, 00:25:14.302 "unmap": false, 00:25:14.302 "flush": false, 00:25:14.302 "reset": true, 00:25:14.302 "nvme_admin": false, 00:25:14.302 "nvme_io": false, 00:25:14.302 "nvme_io_md": false, 00:25:14.302 "write_zeroes": true, 00:25:14.302 "zcopy": false, 00:25:14.302 "get_zone_info": false, 00:25:14.302 "zone_management": false, 00:25:14.302 "zone_append": false, 00:25:14.302 "compare": false, 00:25:14.302 "compare_and_write": false, 00:25:14.302 "abort": false, 00:25:14.302 "seek_hole": false, 00:25:14.302 "seek_data": false, 00:25:14.302 "copy": false, 00:25:14.302 "nvme_iov_md": 
false 00:25:14.302 }, 00:25:14.302 "memory_domains": [ 00:25:14.302 { 00:25:14.302 "dma_device_id": "system", 00:25:14.302 "dma_device_type": 1 00:25:14.302 }, 00:25:14.302 { 00:25:14.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:14.302 "dma_device_type": 2 00:25:14.302 }, 00:25:14.302 { 00:25:14.302 "dma_device_id": "system", 00:25:14.302 "dma_device_type": 1 00:25:14.302 }, 00:25:14.302 { 00:25:14.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:14.302 "dma_device_type": 2 00:25:14.302 } 00:25:14.302 ], 00:25:14.302 "driver_specific": { 00:25:14.302 "raid": { 00:25:14.302 "uuid": "d85e6fcc-d67e-4601-97de-97235a2252e3", 00:25:14.302 "strip_size_kb": 0, 00:25:14.302 "state": "online", 00:25:14.302 "raid_level": "raid1", 00:25:14.302 "superblock": true, 00:25:14.302 "num_base_bdevs": 2, 00:25:14.302 "num_base_bdevs_discovered": 2, 00:25:14.302 "num_base_bdevs_operational": 2, 00:25:14.302 "base_bdevs_list": [ 00:25:14.302 { 00:25:14.302 "name": "pt1", 00:25:14.302 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:14.302 "is_configured": true, 00:25:14.302 "data_offset": 256, 00:25:14.302 "data_size": 7936 00:25:14.302 }, 00:25:14.302 { 00:25:14.302 "name": "pt2", 00:25:14.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:14.302 "is_configured": true, 00:25:14.302 "data_offset": 256, 00:25:14.302 "data_size": 7936 00:25:14.302 } 00:25:14.302 ] 00:25:14.302 } 00:25:14.302 } 00:25:14.302 }' 00:25:14.302 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:14.302 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:14.302 pt2' 00:25:14.302 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:14.302 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:25:14.302 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:14.302 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:14.302 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:14.302 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.302 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:14.561 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.561 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:14.561 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:14.561 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:14.561 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:14.561 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.561 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:14.561 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:14.561 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.561 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:25:14.561 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:14.561 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:14.561 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.561 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:14.561 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:14.561 [2024-10-15 09:24:58.322731] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:14.561 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.561 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' d85e6fcc-d67e-4601-97de-97235a2252e3 '!=' d85e6fcc-d67e-4601-97de-97235a2252e3 ']' 00:25:14.561 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:25:14.561 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:14.561 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:25:14.561 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:25:14.562 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.562 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:14.562 [2024-10-15 09:24:58.374470] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:14.562 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:25:14.562 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:14.562 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:14.562 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:14.562 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:14.562 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:14.562 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:14.562 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:14.562 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:14.562 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:14.562 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:14.562 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:14.562 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:14.562 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.562 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:14.562 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.562 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:25:14.562 "name": "raid_bdev1", 00:25:14.562 "uuid": "d85e6fcc-d67e-4601-97de-97235a2252e3", 00:25:14.562 "strip_size_kb": 0, 00:25:14.562 "state": "online", 00:25:14.562 "raid_level": "raid1", 00:25:14.562 "superblock": true, 00:25:14.562 "num_base_bdevs": 2, 00:25:14.562 "num_base_bdevs_discovered": 1, 00:25:14.562 "num_base_bdevs_operational": 1, 00:25:14.562 "base_bdevs_list": [ 00:25:14.562 { 00:25:14.562 "name": null, 00:25:14.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.562 "is_configured": false, 00:25:14.562 "data_offset": 0, 00:25:14.562 "data_size": 7936 00:25:14.562 }, 00:25:14.562 { 00:25:14.562 "name": "pt2", 00:25:14.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:14.562 "is_configured": true, 00:25:14.562 "data_offset": 256, 00:25:14.562 "data_size": 7936 00:25:14.562 } 00:25:14.562 ] 00:25:14.562 }' 00:25:14.562 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:14.562 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.129 [2024-10-15 09:24:58.918629] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:15.129 [2024-10-15 09:24:58.918668] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:15.129 [2024-10-15 09:24:58.918782] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:15.129 [2024-10-15 09:24:58.918855] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:25:15.129 [2024-10-15 09:24:58.918876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.129 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.129 [2024-10-15 09:24:58.994606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:15.129 [2024-10-15 09:24:58.994680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:15.129 [2024-10-15 09:24:58.994708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:15.129 [2024-10-15 09:24:58.994726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:15.129 [2024-10-15 09:24:58.997500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:15.129 [2024-10-15 09:24:58.997548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:15.129 [2024-10-15 09:24:58.997627] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:15.129 [2024-10-15 09:24:58.997698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:15.129 [2024-10-15 09:24:58.997798] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:15.129 [2024-10-15 09:24:58.997819] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:25:15.130 [2024-10-15 09:24:58.997937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:15.130 [2024-10-15 09:24:58.998030] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:15.130 [2024-10-15 09:24:58.998044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:25:15.130 [2024-10-15 09:24:58.998153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:15.130 pt2 00:25:15.130 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.130 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:15.130 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:15.130 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:15.130 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:15.130 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:15.130 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:15.130 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:15.130 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:15.130 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:15.130 09:24:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:15.130 09:24:59 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.130 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.130 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.130 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.130 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.130 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:15.130 "name": "raid_bdev1", 00:25:15.130 "uuid": "d85e6fcc-d67e-4601-97de-97235a2252e3", 00:25:15.130 "strip_size_kb": 0, 00:25:15.130 "state": "online", 00:25:15.130 "raid_level": "raid1", 00:25:15.130 "superblock": true, 00:25:15.130 "num_base_bdevs": 2, 00:25:15.130 "num_base_bdevs_discovered": 1, 00:25:15.130 "num_base_bdevs_operational": 1, 00:25:15.130 "base_bdevs_list": [ 00:25:15.130 { 00:25:15.130 "name": null, 00:25:15.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.130 "is_configured": false, 00:25:15.130 "data_offset": 256, 00:25:15.130 "data_size": 7936 00:25:15.130 }, 00:25:15.130 { 00:25:15.130 "name": "pt2", 00:25:15.130 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:15.130 "is_configured": true, 00:25:15.130 "data_offset": 256, 00:25:15.130 "data_size": 7936 00:25:15.130 } 00:25:15.130 ] 00:25:15.130 }' 00:25:15.130 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:15.130 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:15.697 09:24:59 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.697 [2024-10-15 09:24:59.546748] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:15.697 [2024-10-15 09:24:59.546789] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:15.697 [2024-10-15 09:24:59.546897] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:15.697 [2024-10-15 09:24:59.546977] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:15.697 [2024-10-15 09:24:59.546994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.697 [2024-10-15 09:24:59.614825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:15.697 [2024-10-15 09:24:59.614911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:15.697 [2024-10-15 09:24:59.614947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:25:15.697 [2024-10-15 09:24:59.614962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:15.697 [2024-10-15 09:24:59.617971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:15.697 [2024-10-15 09:24:59.618028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:15.697 [2024-10-15 09:24:59.618158] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:15.697 [2024-10-15 09:24:59.618225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:15.697 [2024-10-15 09:24:59.618384] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:15.697 [2024-10-15 09:24:59.618402] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:15.697 [2024-10-15 09:24:59.618453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:25:15.697 [2024-10-15 09:24:59.618528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:15.697 [2024-10-15 09:24:59.618637] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:25:15.697 [2024-10-15 09:24:59.618653] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:15.697 [2024-10-15 09:24:59.618740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:15.697 [2024-10-15 09:24:59.618831] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:25:15.697 [2024-10-15 09:24:59.618850] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:25:15.697 [2024-10-15 09:24:59.618995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:15.697 pt1 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:15.697 09:24:59 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:15.697 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:15.956 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.956 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.956 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.956 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.956 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.956 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:15.956 "name": "raid_bdev1", 00:25:15.956 "uuid": "d85e6fcc-d67e-4601-97de-97235a2252e3", 00:25:15.956 "strip_size_kb": 0, 00:25:15.956 "state": "online", 00:25:15.956 "raid_level": "raid1", 00:25:15.956 "superblock": true, 00:25:15.956 "num_base_bdevs": 2, 00:25:15.956 "num_base_bdevs_discovered": 1, 00:25:15.956 "num_base_bdevs_operational": 1, 00:25:15.956 "base_bdevs_list": [ 00:25:15.956 { 00:25:15.956 "name": null, 00:25:15.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.956 "is_configured": false, 00:25:15.956 "data_offset": 256, 00:25:15.956 "data_size": 7936 00:25:15.956 }, 00:25:15.956 { 00:25:15.956 "name": "pt2", 00:25:15.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:15.956 "is_configured": true, 00:25:15.956 "data_offset": 256, 00:25:15.956 "data_size": 7936 00:25:15.956 } 00:25:15.956 ] 00:25:15.956 }' 00:25:15.956 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:15.956 09:24:59 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:25:16.566 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:25:16.566 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.566 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.566 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:16.566 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.566 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:25:16.566 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:16.566 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.566 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.566 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:25:16.566 [2024-10-15 09:25:00.243564] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:16.566 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.566 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' d85e6fcc-d67e-4601-97de-97235a2252e3 '!=' d85e6fcc-d67e-4601-97de-97235a2252e3 ']' 00:25:16.566 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89493 00:25:16.566 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89493 ']' 00:25:16.566 09:25:00 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89493 00:25:16.567 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:25:16.567 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:16.567 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89493 00:25:16.567 killing process with pid 89493 00:25:16.567 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:16.567 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:16.567 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89493' 00:25:16.567 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 89493 00:25:16.567 [2024-10-15 09:25:00.325753] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:16.567 09:25:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 89493 00:25:16.567 [2024-10-15 09:25:00.325883] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:16.567 [2024-10-15 09:25:00.325955] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:16.567 [2024-10-15 09:25:00.325997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:25:16.825 [2024-10-15 09:25:00.533642] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:17.761 ************************************ 00:25:17.762 END TEST raid_superblock_test_md_interleaved 00:25:17.762 ************************************ 00:25:17.762 09:25:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:25:17.762 00:25:17.762 real 0m6.945s 00:25:17.762 user 0m10.931s 00:25:17.762 sys 0m1.039s 00:25:17.762 09:25:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:17.762 09:25:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:18.021 09:25:01 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:25:18.021 09:25:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:25:18.021 09:25:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:18.021 09:25:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:18.021 ************************************ 00:25:18.021 START TEST raid_rebuild_test_sb_md_interleaved 00:25:18.021 ************************************ 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:18.021 09:25:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:25:18.021 
09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89827 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89827 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89827 ']' 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:18.021 09:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:18.021 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:18.021 Zero copy mechanism will not be used. 00:25:18.021 [2024-10-15 09:25:01.835912] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:25:18.021 [2024-10-15 09:25:01.836101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89827 ] 00:25:18.280 [2024-10-15 09:25:02.015427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.280 [2024-10-15 09:25:02.157500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.539 [2024-10-15 09:25:02.379865] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:18.539 [2024-10-15 09:25:02.379929] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.107 BaseBdev1_malloc 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.107 09:25:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.107 [2024-10-15 09:25:02.852049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:19.107 [2024-10-15 09:25:02.852172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:19.107 [2024-10-15 09:25:02.852207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:19.107 [2024-10-15 09:25:02.852227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:19.107 [2024-10-15 09:25:02.855005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:19.107 [2024-10-15 09:25:02.855258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:19.107 BaseBdev1 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.107 BaseBdev2_malloc 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:25:19.107 [2024-10-15 09:25:02.913302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:19.107 [2024-10-15 09:25:02.913391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:19.107 [2024-10-15 09:25:02.913424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:19.107 [2024-10-15 09:25:02.913442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:19.107 [2024-10-15 09:25:02.916296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:19.107 [2024-10-15 09:25:02.916494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:19.107 BaseBdev2 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.107 spare_malloc 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.107 spare_delay 00:25:19.107 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.108 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:19.108 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.108 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.108 [2024-10-15 09:25:02.992591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:19.108 [2024-10-15 09:25:02.992686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:19.108 [2024-10-15 09:25:02.992734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:19.108 [2024-10-15 09:25:02.992752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:19.108 [2024-10-15 09:25:02.995443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:19.108 [2024-10-15 09:25:02.995491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:19.108 spare 00:25:19.108 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.108 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:25:19.108 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.108 09:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.108 [2024-10-15 09:25:03.000632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:19.108 [2024-10-15 09:25:03.003296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:19.108 [2024-10-15 
09:25:03.003568] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:19.108 [2024-10-15 09:25:03.003591] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:19.108 [2024-10-15 09:25:03.003694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:19.108 [2024-10-15 09:25:03.003827] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:19.108 [2024-10-15 09:25:03.003841] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:19.108 [2024-10-15 09:25:03.003934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:19.108 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.108 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:19.108 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:19.108 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:19.108 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:19.108 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:19.108 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:19.108 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:19.108 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:19.108 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:25:19.108 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:19.108 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.108 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.108 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.108 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.108 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.384 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:19.384 "name": "raid_bdev1", 00:25:19.384 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:19.384 "strip_size_kb": 0, 00:25:19.384 "state": "online", 00:25:19.384 "raid_level": "raid1", 00:25:19.384 "superblock": true, 00:25:19.384 "num_base_bdevs": 2, 00:25:19.384 "num_base_bdevs_discovered": 2, 00:25:19.384 "num_base_bdevs_operational": 2, 00:25:19.384 "base_bdevs_list": [ 00:25:19.384 { 00:25:19.384 "name": "BaseBdev1", 00:25:19.384 "uuid": "d51c7ff2-04c9-50e6-91b8-4cf69250e6b9", 00:25:19.384 "is_configured": true, 00:25:19.384 "data_offset": 256, 00:25:19.384 "data_size": 7936 00:25:19.384 }, 00:25:19.384 { 00:25:19.384 "name": "BaseBdev2", 00:25:19.384 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:19.384 "is_configured": true, 00:25:19.384 "data_offset": 256, 00:25:19.384 "data_size": 7936 00:25:19.384 } 00:25:19.384 ] 00:25:19.384 }' 00:25:19.384 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:19.384 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.643 09:25:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:25:19.643 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:19.643 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.643 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.643 [2024-10-15 09:25:03.557266] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:25:19.902 09:25:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.902 [2024-10-15 09:25:03.672895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.902 09:25:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.902 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:19.902 "name": "raid_bdev1", 00:25:19.902 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:19.902 "strip_size_kb": 0, 00:25:19.902 "state": "online", 00:25:19.902 "raid_level": "raid1", 00:25:19.902 "superblock": true, 00:25:19.902 "num_base_bdevs": 2, 00:25:19.902 "num_base_bdevs_discovered": 1, 00:25:19.902 "num_base_bdevs_operational": 1, 00:25:19.902 "base_bdevs_list": [ 00:25:19.902 { 00:25:19.902 "name": null, 00:25:19.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.902 "is_configured": false, 00:25:19.902 "data_offset": 0, 00:25:19.902 "data_size": 7936 00:25:19.902 }, 00:25:19.902 { 00:25:19.902 "name": "BaseBdev2", 00:25:19.902 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:19.903 "is_configured": true, 00:25:19.903 "data_offset": 256, 00:25:19.903 "data_size": 7936 00:25:19.903 } 00:25:19.903 ] 00:25:19.903 }' 00:25:19.903 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:19.903 09:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:20.472 09:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:20.472 09:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.472 09:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:20.472 [2024-10-15 09:25:04.197097] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:20.472 [2024-10-15 09:25:04.215825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:20.472 09:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.472 09:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:25:20.472 [2024-10-15 09:25:04.218519] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:21.409 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:21.409 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:21.409 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:21.409 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:21.409 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:21.409 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.409 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.409 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:21.409 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.409 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.409 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:21.409 "name": "raid_bdev1", 00:25:21.409 
"uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:21.409 "strip_size_kb": 0, 00:25:21.409 "state": "online", 00:25:21.409 "raid_level": "raid1", 00:25:21.409 "superblock": true, 00:25:21.409 "num_base_bdevs": 2, 00:25:21.409 "num_base_bdevs_discovered": 2, 00:25:21.409 "num_base_bdevs_operational": 2, 00:25:21.409 "process": { 00:25:21.409 "type": "rebuild", 00:25:21.409 "target": "spare", 00:25:21.409 "progress": { 00:25:21.409 "blocks": 2560, 00:25:21.409 "percent": 32 00:25:21.409 } 00:25:21.409 }, 00:25:21.409 "base_bdevs_list": [ 00:25:21.409 { 00:25:21.409 "name": "spare", 00:25:21.409 "uuid": "810d2799-8878-54ea-9789-0a8d05e19d7b", 00:25:21.409 "is_configured": true, 00:25:21.409 "data_offset": 256, 00:25:21.409 "data_size": 7936 00:25:21.409 }, 00:25:21.409 { 00:25:21.409 "name": "BaseBdev2", 00:25:21.409 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:21.409 "is_configured": true, 00:25:21.409 "data_offset": 256, 00:25:21.409 "data_size": 7936 00:25:21.409 } 00:25:21.409 ] 00:25:21.409 }' 00:25:21.409 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:21.409 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:21.668 [2024-10-15 09:25:05.396542] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:25:21.668 [2024-10-15 09:25:05.430162] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:21.668 [2024-10-15 09:25:05.430265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:21.668 [2024-10-15 09:25:05.430290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:21.668 [2024-10-15 09:25:05.430310] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:21.668 "name": "raid_bdev1", 00:25:21.668 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:21.668 "strip_size_kb": 0, 00:25:21.668 "state": "online", 00:25:21.668 "raid_level": "raid1", 00:25:21.668 "superblock": true, 00:25:21.668 "num_base_bdevs": 2, 00:25:21.668 "num_base_bdevs_discovered": 1, 00:25:21.668 "num_base_bdevs_operational": 1, 00:25:21.668 "base_bdevs_list": [ 00:25:21.668 { 00:25:21.668 "name": null, 00:25:21.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.668 "is_configured": false, 00:25:21.668 "data_offset": 0, 00:25:21.668 "data_size": 7936 00:25:21.668 }, 00:25:21.668 { 00:25:21.668 "name": "BaseBdev2", 00:25:21.668 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:21.668 "is_configured": true, 00:25:21.668 "data_offset": 256, 00:25:21.668 "data_size": 7936 00:25:21.668 } 00:25:21.668 ] 00:25:21.668 }' 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:21.668 09:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:22.236 09:25:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:22.236 09:25:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:25:22.236 09:25:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:22.236 09:25:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:22.236 09:25:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:22.236 09:25:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.236 09:25:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.236 09:25:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:22.236 09:25:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.236 09:25:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.236 09:25:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:22.236 "name": "raid_bdev1", 00:25:22.236 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:22.236 "strip_size_kb": 0, 00:25:22.236 "state": "online", 00:25:22.236 "raid_level": "raid1", 00:25:22.236 "superblock": true, 00:25:22.236 "num_base_bdevs": 2, 00:25:22.236 "num_base_bdevs_discovered": 1, 00:25:22.236 "num_base_bdevs_operational": 1, 00:25:22.236 "base_bdevs_list": [ 00:25:22.236 { 00:25:22.236 "name": null, 00:25:22.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.236 "is_configured": false, 00:25:22.236 "data_offset": 0, 00:25:22.236 "data_size": 7936 00:25:22.236 }, 00:25:22.236 { 00:25:22.236 "name": "BaseBdev2", 00:25:22.236 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:22.236 "is_configured": true, 00:25:22.236 "data_offset": 256, 00:25:22.236 "data_size": 7936 00:25:22.236 } 00:25:22.236 ] 00:25:22.236 }' 
00:25:22.236 09:25:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:22.236 09:25:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:22.236 09:25:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:22.496 09:25:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:22.496 09:25:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:22.496 09:25:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.496 09:25:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:22.496 [2024-10-15 09:25:06.205536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:22.496 [2024-10-15 09:25:06.222365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:22.496 09:25:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.496 09:25:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:25:22.496 [2024-10-15 09:25:06.225066] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:23.431 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:23.431 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:23.431 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:23.431 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:25:23.431 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:23.431 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.431 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.431 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.431 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:23.431 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.431 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:23.431 "name": "raid_bdev1", 00:25:23.431 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:23.431 "strip_size_kb": 0, 00:25:23.431 "state": "online", 00:25:23.431 "raid_level": "raid1", 00:25:23.431 "superblock": true, 00:25:23.431 "num_base_bdevs": 2, 00:25:23.431 "num_base_bdevs_discovered": 2, 00:25:23.431 "num_base_bdevs_operational": 2, 00:25:23.431 "process": { 00:25:23.431 "type": "rebuild", 00:25:23.431 "target": "spare", 00:25:23.431 "progress": { 00:25:23.431 "blocks": 2560, 00:25:23.431 "percent": 32 00:25:23.431 } 00:25:23.431 }, 00:25:23.431 "base_bdevs_list": [ 00:25:23.431 { 00:25:23.431 "name": "spare", 00:25:23.431 "uuid": "810d2799-8878-54ea-9789-0a8d05e19d7b", 00:25:23.431 "is_configured": true, 00:25:23.431 "data_offset": 256, 00:25:23.431 "data_size": 7936 00:25:23.431 }, 00:25:23.431 { 00:25:23.431 "name": "BaseBdev2", 00:25:23.431 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:23.431 "is_configured": true, 00:25:23.431 "data_offset": 256, 00:25:23.431 "data_size": 7936 00:25:23.431 } 00:25:23.431 ] 00:25:23.431 }' 00:25:23.431 09:25:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:25:23.690 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=818 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:23.690 09:25:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:23.690 "name": "raid_bdev1", 00:25:23.690 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:23.690 "strip_size_kb": 0, 00:25:23.690 "state": "online", 00:25:23.690 "raid_level": "raid1", 00:25:23.690 "superblock": true, 00:25:23.690 "num_base_bdevs": 2, 00:25:23.690 "num_base_bdevs_discovered": 2, 00:25:23.690 "num_base_bdevs_operational": 2, 00:25:23.690 "process": { 00:25:23.690 "type": "rebuild", 00:25:23.690 "target": "spare", 00:25:23.690 "progress": { 00:25:23.690 "blocks": 2816, 00:25:23.690 "percent": 35 00:25:23.690 } 00:25:23.690 }, 00:25:23.690 "base_bdevs_list": [ 00:25:23.690 { 00:25:23.690 "name": "spare", 00:25:23.690 "uuid": "810d2799-8878-54ea-9789-0a8d05e19d7b", 00:25:23.690 "is_configured": true, 00:25:23.690 "data_offset": 256, 00:25:23.690 "data_size": 7936 00:25:23.690 }, 00:25:23.690 { 00:25:23.690 "name": "BaseBdev2", 00:25:23.690 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:23.690 "is_configured": true, 00:25:23.690 "data_offset": 256, 00:25:23.690 "data_size": 7936 00:25:23.690 } 00:25:23.690 ] 00:25:23.690 }' 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:23.690 09:25:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:25.103 09:25:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:25.103 09:25:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:25.103 09:25:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:25.103 09:25:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:25.103 09:25:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:25.103 09:25:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:25.103 09:25:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.104 09:25:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.104 09:25:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.104 09:25:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:25.104 09:25:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.104 09:25:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:25.104 "name": "raid_bdev1", 00:25:25.104 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:25.104 "strip_size_kb": 0, 00:25:25.104 "state": "online", 00:25:25.104 "raid_level": "raid1", 00:25:25.104 "superblock": true, 00:25:25.104 "num_base_bdevs": 2, 00:25:25.104 "num_base_bdevs_discovered": 2, 00:25:25.104 "num_base_bdevs_operational": 2, 00:25:25.104 "process": { 00:25:25.104 "type": "rebuild", 00:25:25.104 "target": "spare", 00:25:25.104 "progress": { 00:25:25.104 "blocks": 5888, 00:25:25.104 "percent": 74 00:25:25.104 } 00:25:25.104 }, 00:25:25.104 "base_bdevs_list": [ 00:25:25.104 { 00:25:25.104 "name": "spare", 00:25:25.104 "uuid": "810d2799-8878-54ea-9789-0a8d05e19d7b", 00:25:25.104 "is_configured": true, 00:25:25.104 "data_offset": 256, 00:25:25.104 "data_size": 7936 00:25:25.104 }, 00:25:25.104 { 00:25:25.104 "name": "BaseBdev2", 00:25:25.104 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:25.104 "is_configured": true, 00:25:25.104 "data_offset": 256, 00:25:25.104 "data_size": 7936 00:25:25.104 } 00:25:25.104 ] 00:25:25.104 }' 00:25:25.104 09:25:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:25.104 09:25:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:25.104 09:25:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:25.104 09:25:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:25.104 09:25:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:25.671 [2024-10-15 09:25:09.353742] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:25.671 [2024-10-15 09:25:09.353878] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:25.671 [2024-10-15 09:25:09.354098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:25.930 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:25.930 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:25.930 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:25.930 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:25.930 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:25.930 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:25.930 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.930 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.930 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.930 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:25.930 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.930 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:25.930 "name": "raid_bdev1", 00:25:25.930 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:25.930 "strip_size_kb": 0, 00:25:25.930 "state": "online", 00:25:25.930 "raid_level": "raid1", 00:25:25.930 "superblock": true, 00:25:25.930 "num_base_bdevs": 2, 00:25:25.930 
"num_base_bdevs_discovered": 2, 00:25:25.930 "num_base_bdevs_operational": 2, 00:25:25.930 "base_bdevs_list": [ 00:25:25.930 { 00:25:25.930 "name": "spare", 00:25:25.930 "uuid": "810d2799-8878-54ea-9789-0a8d05e19d7b", 00:25:25.930 "is_configured": true, 00:25:25.930 "data_offset": 256, 00:25:25.930 "data_size": 7936 00:25:25.930 }, 00:25:25.930 { 00:25:25.930 "name": "BaseBdev2", 00:25:25.930 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:25.930 "is_configured": true, 00:25:25.930 "data_offset": 256, 00:25:25.930 "data_size": 7936 00:25:25.930 } 00:25:25.930 ] 00:25:25.930 }' 00:25:25.930 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:26.193 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:26.193 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:26.193 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:25:26.193 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:25:26.193 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:26.193 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:26.193 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:26.193 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:26.193 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:26.193 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.193 09:25:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.193 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.193 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.193 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.193 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:26.193 "name": "raid_bdev1", 00:25:26.193 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:26.193 "strip_size_kb": 0, 00:25:26.193 "state": "online", 00:25:26.193 "raid_level": "raid1", 00:25:26.193 "superblock": true, 00:25:26.193 "num_base_bdevs": 2, 00:25:26.193 "num_base_bdevs_discovered": 2, 00:25:26.193 "num_base_bdevs_operational": 2, 00:25:26.193 "base_bdevs_list": [ 00:25:26.193 { 00:25:26.193 "name": "spare", 00:25:26.193 "uuid": "810d2799-8878-54ea-9789-0a8d05e19d7b", 00:25:26.193 "is_configured": true, 00:25:26.193 "data_offset": 256, 00:25:26.193 "data_size": 7936 00:25:26.193 }, 00:25:26.193 { 00:25:26.193 "name": "BaseBdev2", 00:25:26.193 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:26.193 "is_configured": true, 00:25:26.193 "data_offset": 256, 00:25:26.193 "data_size": 7936 00:25:26.193 } 00:25:26.193 ] 00:25:26.193 }' 00:25:26.193 09:25:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:26.193 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:26.193 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:26.193 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:26.193 09:25:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:26.193 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:26.193 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:26.193 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:26.193 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:26.193 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:26.193 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:26.193 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:26.193 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:26.193 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:26.193 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.193 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.193 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.193 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.452 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.452 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:26.452 "name": 
"raid_bdev1", 00:25:26.452 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:26.452 "strip_size_kb": 0, 00:25:26.452 "state": "online", 00:25:26.452 "raid_level": "raid1", 00:25:26.452 "superblock": true, 00:25:26.452 "num_base_bdevs": 2, 00:25:26.452 "num_base_bdevs_discovered": 2, 00:25:26.452 "num_base_bdevs_operational": 2, 00:25:26.452 "base_bdevs_list": [ 00:25:26.452 { 00:25:26.452 "name": "spare", 00:25:26.452 "uuid": "810d2799-8878-54ea-9789-0a8d05e19d7b", 00:25:26.452 "is_configured": true, 00:25:26.452 "data_offset": 256, 00:25:26.452 "data_size": 7936 00:25:26.452 }, 00:25:26.452 { 00:25:26.452 "name": "BaseBdev2", 00:25:26.452 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:26.452 "is_configured": true, 00:25:26.452 "data_offset": 256, 00:25:26.452 "data_size": 7936 00:25:26.452 } 00:25:26.452 ] 00:25:26.452 }' 00:25:26.452 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:26.452 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.711 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:26.711 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.711 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.970 [2024-10-15 09:25:10.638234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:26.970 [2024-10-15 09:25:10.638283] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:26.970 [2024-10-15 09:25:10.638434] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:26.970 [2024-10-15 09:25:10.638561] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:26.970 [2024-10-15 
09:25:10.638581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.970 09:25:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.970 [2024-10-15 09:25:10.730213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:26.970 [2024-10-15 09:25:10.730291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.970 [2024-10-15 09:25:10.730323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:26.970 [2024-10-15 09:25:10.730339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.970 [2024-10-15 09:25:10.733196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.970 [2024-10-15 09:25:10.733239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:26.970 [2024-10-15 09:25:10.733331] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:26.970 [2024-10-15 09:25:10.733397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:26.970 [2024-10-15 09:25:10.733540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:26.970 spare 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.970 [2024-10-15 09:25:10.833677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:26.970 [2024-10-15 09:25:10.833770] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:26.970 [2024-10-15 09:25:10.833973] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:25:26.970 [2024-10-15 09:25:10.834129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:26.970 [2024-10-15 09:25:10.834144] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:25:26.970 [2024-10-15 09:25:10.834369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:26.970 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:26.971 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:26.971 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:26.971 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:26.971 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:26.971 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:26.971 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:26.971 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.971 09:25:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.971 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.971 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.971 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.971 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:26.971 "name": "raid_bdev1", 00:25:26.971 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:26.971 "strip_size_kb": 0, 00:25:26.971 "state": "online", 00:25:26.971 "raid_level": "raid1", 00:25:26.971 "superblock": true, 00:25:26.971 "num_base_bdevs": 2, 00:25:26.971 "num_base_bdevs_discovered": 2, 00:25:26.971 "num_base_bdevs_operational": 2, 00:25:26.971 "base_bdevs_list": [ 00:25:26.971 { 00:25:26.971 "name": "spare", 00:25:26.971 "uuid": "810d2799-8878-54ea-9789-0a8d05e19d7b", 00:25:26.971 "is_configured": true, 00:25:26.971 "data_offset": 256, 00:25:26.971 "data_size": 7936 00:25:26.971 }, 00:25:26.971 { 00:25:26.971 "name": "BaseBdev2", 00:25:26.971 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:26.971 "is_configured": true, 00:25:26.971 "data_offset": 256, 00:25:26.971 "data_size": 7936 00:25:26.971 } 00:25:26.971 ] 00:25:26.971 }' 00:25:26.971 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:26.971 09:25:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:27.540 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:27.540 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:27.540 09:25:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:27.540 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:27.540 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:27.540 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.540 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.540 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:27.540 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.540 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.540 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:27.540 "name": "raid_bdev1", 00:25:27.540 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:27.540 "strip_size_kb": 0, 00:25:27.540 "state": "online", 00:25:27.540 "raid_level": "raid1", 00:25:27.540 "superblock": true, 00:25:27.540 "num_base_bdevs": 2, 00:25:27.540 "num_base_bdevs_discovered": 2, 00:25:27.540 "num_base_bdevs_operational": 2, 00:25:27.540 "base_bdevs_list": [ 00:25:27.540 { 00:25:27.540 "name": "spare", 00:25:27.540 "uuid": "810d2799-8878-54ea-9789-0a8d05e19d7b", 00:25:27.540 "is_configured": true, 00:25:27.540 "data_offset": 256, 00:25:27.540 "data_size": 7936 00:25:27.540 }, 00:25:27.540 { 00:25:27.540 "name": "BaseBdev2", 00:25:27.540 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:27.540 "is_configured": true, 00:25:27.540 "data_offset": 256, 00:25:27.540 "data_size": 7936 00:25:27.540 } 00:25:27.540 ] 00:25:27.540 }' 00:25:27.540 09:25:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:27.799 [2024-10-15 09:25:11.590593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:27.799 09:25:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:27.799 "name": "raid_bdev1", 00:25:27.799 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:27.799 "strip_size_kb": 0, 00:25:27.799 "state": "online", 00:25:27.799 
"raid_level": "raid1", 00:25:27.799 "superblock": true, 00:25:27.799 "num_base_bdevs": 2, 00:25:27.799 "num_base_bdevs_discovered": 1, 00:25:27.799 "num_base_bdevs_operational": 1, 00:25:27.799 "base_bdevs_list": [ 00:25:27.799 { 00:25:27.799 "name": null, 00:25:27.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.799 "is_configured": false, 00:25:27.799 "data_offset": 0, 00:25:27.799 "data_size": 7936 00:25:27.799 }, 00:25:27.799 { 00:25:27.799 "name": "BaseBdev2", 00:25:27.799 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:27.799 "is_configured": true, 00:25:27.799 "data_offset": 256, 00:25:27.799 "data_size": 7936 00:25:27.799 } 00:25:27.799 ] 00:25:27.799 }' 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:27.799 09:25:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:28.365 09:25:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:28.365 09:25:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.365 09:25:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:28.365 [2024-10-15 09:25:12.106831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:28.365 [2024-10-15 09:25:12.107302] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:28.365 [2024-10-15 09:25:12.107486] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:28.365 [2024-10-15 09:25:12.107636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:28.365 [2024-10-15 09:25:12.124968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:25:28.365 09:25:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.365 09:25:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:25:28.365 [2024-10-15 09:25:12.127808] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:29.351 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:29.351 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:29.351 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:29.351 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:29.351 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:29.351 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:29.351 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.351 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.351 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.351 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.351 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:25:29.351 "name": "raid_bdev1", 00:25:29.351 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:29.351 "strip_size_kb": 0, 00:25:29.351 "state": "online", 00:25:29.351 "raid_level": "raid1", 00:25:29.351 "superblock": true, 00:25:29.351 "num_base_bdevs": 2, 00:25:29.351 "num_base_bdevs_discovered": 2, 00:25:29.351 "num_base_bdevs_operational": 2, 00:25:29.351 "process": { 00:25:29.351 "type": "rebuild", 00:25:29.351 "target": "spare", 00:25:29.351 "progress": { 00:25:29.351 "blocks": 2560, 00:25:29.351 "percent": 32 00:25:29.351 } 00:25:29.351 }, 00:25:29.351 "base_bdevs_list": [ 00:25:29.351 { 00:25:29.351 "name": "spare", 00:25:29.351 "uuid": "810d2799-8878-54ea-9789-0a8d05e19d7b", 00:25:29.351 "is_configured": true, 00:25:29.351 "data_offset": 256, 00:25:29.351 "data_size": 7936 00:25:29.351 }, 00:25:29.351 { 00:25:29.351 "name": "BaseBdev2", 00:25:29.351 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:29.351 "is_configured": true, 00:25:29.351 "data_offset": 256, 00:25:29.351 "data_size": 7936 00:25:29.351 } 00:25:29.351 ] 00:25:29.351 }' 00:25:29.351 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:29.351 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:29.351 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.610 [2024-10-15 09:25:13.297534] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:29.610 [2024-10-15 09:25:13.339103] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:29.610 [2024-10-15 09:25:13.339520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:29.610 [2024-10-15 09:25:13.339683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:29.610 [2024-10-15 09:25:13.339808] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:29.610 09:25:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:29.610 "name": "raid_bdev1", 00:25:29.610 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:29.610 "strip_size_kb": 0, 00:25:29.610 "state": "online", 00:25:29.610 "raid_level": "raid1", 00:25:29.610 "superblock": true, 00:25:29.610 "num_base_bdevs": 2, 00:25:29.610 "num_base_bdevs_discovered": 1, 00:25:29.610 "num_base_bdevs_operational": 1, 00:25:29.610 "base_bdevs_list": [ 00:25:29.610 { 00:25:29.610 "name": null, 00:25:29.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.610 "is_configured": false, 00:25:29.610 "data_offset": 0, 00:25:29.610 "data_size": 7936 00:25:29.610 }, 00:25:29.610 { 00:25:29.610 "name": "BaseBdev2", 00:25:29.610 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:29.610 "is_configured": true, 00:25:29.610 "data_offset": 256, 00:25:29.610 "data_size": 7936 00:25:29.610 } 00:25:29.610 ] 00:25:29.610 }' 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:29.610 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.177 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:30.177 09:25:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.177 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.177 [2024-10-15 09:25:13.884906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:30.177 [2024-10-15 09:25:13.885012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:30.177 [2024-10-15 09:25:13.885046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:25:30.177 [2024-10-15 09:25:13.885065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:30.177 [2024-10-15 09:25:13.885416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:30.177 [2024-10-15 09:25:13.885451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:30.177 [2024-10-15 09:25:13.885538] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:30.177 [2024-10-15 09:25:13.885561] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:30.177 [2024-10-15 09:25:13.885576] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:30.177 [2024-10-15 09:25:13.885607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:30.177 [2024-10-15 09:25:13.903092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:30.177 spare 00:25:30.177 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.177 09:25:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:25:30.177 [2024-10-15 09:25:13.905863] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:31.112 09:25:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:31.112 09:25:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:31.112 09:25:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:31.112 09:25:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:31.112 09:25:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:31.112 09:25:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.112 09:25:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.112 09:25:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.112 09:25:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:31.113 09:25:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.113 09:25:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:25:31.113 "name": "raid_bdev1", 00:25:31.113 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:31.113 "strip_size_kb": 0, 00:25:31.113 "state": "online", 00:25:31.113 "raid_level": "raid1", 00:25:31.113 "superblock": true, 00:25:31.113 "num_base_bdevs": 2, 00:25:31.113 "num_base_bdevs_discovered": 2, 00:25:31.113 "num_base_bdevs_operational": 2, 00:25:31.113 "process": { 00:25:31.113 "type": "rebuild", 00:25:31.113 "target": "spare", 00:25:31.113 "progress": { 00:25:31.113 "blocks": 2304, 00:25:31.113 "percent": 29 00:25:31.113 } 00:25:31.113 }, 00:25:31.113 "base_bdevs_list": [ 00:25:31.113 { 00:25:31.113 "name": "spare", 00:25:31.113 "uuid": "810d2799-8878-54ea-9789-0a8d05e19d7b", 00:25:31.113 "is_configured": true, 00:25:31.113 "data_offset": 256, 00:25:31.113 "data_size": 7936 00:25:31.113 }, 00:25:31.113 { 00:25:31.113 "name": "BaseBdev2", 00:25:31.113 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:31.113 "is_configured": true, 00:25:31.113 "data_offset": 256, 00:25:31.113 "data_size": 7936 00:25:31.113 } 00:25:31.113 ] 00:25:31.113 }' 00:25:31.113 09:25:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:31.113 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:31.113 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:31.371 [2024-10-15 
09:25:15.052103] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:31.371 [2024-10-15 09:25:15.118219] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:31.371 [2024-10-15 09:25:15.118361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:31.371 [2024-10-15 09:25:15.118403] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:31.371 [2024-10-15 09:25:15.118415] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:31.371 09:25:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:31.371 "name": "raid_bdev1", 00:25:31.371 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:31.371 "strip_size_kb": 0, 00:25:31.371 "state": "online", 00:25:31.371 "raid_level": "raid1", 00:25:31.371 "superblock": true, 00:25:31.371 "num_base_bdevs": 2, 00:25:31.371 "num_base_bdevs_discovered": 1, 00:25:31.371 "num_base_bdevs_operational": 1, 00:25:31.371 "base_bdevs_list": [ 00:25:31.371 { 00:25:31.371 "name": null, 00:25:31.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.371 "is_configured": false, 00:25:31.371 "data_offset": 0, 00:25:31.371 "data_size": 7936 00:25:31.371 }, 00:25:31.371 { 00:25:31.371 "name": "BaseBdev2", 00:25:31.371 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:31.371 "is_configured": true, 00:25:31.371 "data_offset": 256, 00:25:31.371 "data_size": 7936 00:25:31.371 } 00:25:31.371 ] 00:25:31.371 }' 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:31.371 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:31.938 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:31.938 09:25:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:31.938 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:31.938 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:31.938 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:31.938 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.938 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.938 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.939 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:31.939 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.939 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:31.939 "name": "raid_bdev1", 00:25:31.939 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:31.939 "strip_size_kb": 0, 00:25:31.939 "state": "online", 00:25:31.939 "raid_level": "raid1", 00:25:31.939 "superblock": true, 00:25:31.939 "num_base_bdevs": 2, 00:25:31.939 "num_base_bdevs_discovered": 1, 00:25:31.939 "num_base_bdevs_operational": 1, 00:25:31.939 "base_bdevs_list": [ 00:25:31.939 { 00:25:31.939 "name": null, 00:25:31.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.939 "is_configured": false, 00:25:31.939 "data_offset": 0, 00:25:31.939 "data_size": 7936 00:25:31.939 }, 00:25:31.939 { 00:25:31.939 "name": "BaseBdev2", 00:25:31.939 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:31.939 "is_configured": true, 00:25:31.939 "data_offset": 256, 
00:25:31.939 "data_size": 7936 00:25:31.939 } 00:25:31.939 ] 00:25:31.939 }' 00:25:31.939 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:31.939 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:31.939 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:31.939 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:31.939 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:25:31.939 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.939 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:31.939 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.939 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:31.939 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.939 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:31.939 [2024-10-15 09:25:15.801656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:31.939 [2024-10-15 09:25:15.801745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.939 [2024-10-15 09:25:15.801782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:31.939 [2024-10-15 09:25:15.801797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.939 [2024-10-15 09:25:15.802044] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:31.939 [2024-10-15 09:25:15.802065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:31.939 [2024-10-15 09:25:15.802174] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:31.939 [2024-10-15 09:25:15.802197] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:31.939 [2024-10-15 09:25:15.802211] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:31.939 [2024-10-15 09:25:15.802231] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:25:31.939 BaseBdev1 00:25:31.939 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.939 09:25:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:25:33.315 09:25:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:33.315 09:25:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:33.315 09:25:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:33.315 09:25:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:33.315 09:25:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:33.315 09:25:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:33.315 09:25:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:33.315 09:25:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:33.315 09:25:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:33.315 09:25:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:33.315 09:25:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:33.315 09:25:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.315 09:25:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:33.315 09:25:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.315 09:25:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.315 09:25:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:33.315 "name": "raid_bdev1", 00:25:33.315 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:33.315 "strip_size_kb": 0, 00:25:33.315 "state": "online", 00:25:33.315 "raid_level": "raid1", 00:25:33.315 "superblock": true, 00:25:33.315 "num_base_bdevs": 2, 00:25:33.315 "num_base_bdevs_discovered": 1, 00:25:33.315 "num_base_bdevs_operational": 1, 00:25:33.315 "base_bdevs_list": [ 00:25:33.315 { 00:25:33.315 "name": null, 00:25:33.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.315 "is_configured": false, 00:25:33.315 "data_offset": 0, 00:25:33.315 "data_size": 7936 00:25:33.315 }, 00:25:33.315 { 00:25:33.315 "name": "BaseBdev2", 00:25:33.315 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:33.315 "is_configured": true, 00:25:33.315 "data_offset": 256, 00:25:33.315 "data_size": 7936 00:25:33.315 } 00:25:33.315 ] 00:25:33.315 }' 00:25:33.315 09:25:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:33.315 09:25:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:33.608 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:33.608 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:33.608 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:33.608 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:33.608 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:33.608 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:33.608 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.608 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:33.608 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.608 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.608 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:33.608 "name": "raid_bdev1", 00:25:33.608 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:33.608 "strip_size_kb": 0, 00:25:33.608 "state": "online", 00:25:33.608 "raid_level": "raid1", 00:25:33.608 "superblock": true, 00:25:33.608 "num_base_bdevs": 2, 00:25:33.608 "num_base_bdevs_discovered": 1, 00:25:33.608 "num_base_bdevs_operational": 1, 00:25:33.608 "base_bdevs_list": [ 00:25:33.608 { 00:25:33.608 "name": 
null, 00:25:33.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.608 "is_configured": false, 00:25:33.608 "data_offset": 0, 00:25:33.608 "data_size": 7936 00:25:33.608 }, 00:25:33.608 { 00:25:33.608 "name": "BaseBdev2", 00:25:33.608 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:33.608 "is_configured": true, 00:25:33.609 "data_offset": 256, 00:25:33.609 "data_size": 7936 00:25:33.609 } 00:25:33.609 ] 00:25:33.609 }' 00:25:33.609 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:33.609 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:33.609 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:33.609 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:33.609 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:33.609 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:25:33.609 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:33.609 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:33.609 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.609 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:33.609 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.609 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:33.609 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.609 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:33.609 [2024-10-15 09:25:17.470359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:33.609 [2024-10-15 09:25:17.470728] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:33.609 [2024-10-15 09:25:17.470768] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:33.609 request: 00:25:33.609 { 00:25:33.609 "base_bdev": "BaseBdev1", 00:25:33.609 "raid_bdev": "raid_bdev1", 00:25:33.609 "method": "bdev_raid_add_base_bdev", 00:25:33.609 "req_id": 1 00:25:33.609 } 00:25:33.609 Got JSON-RPC error response 00:25:33.609 response: 00:25:33.609 { 00:25:33.609 "code": -22, 00:25:33.609 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:33.609 } 00:25:33.609 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:33.609 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:25:33.609 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:33.609 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:33.609 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:33.609 09:25:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:25:34.564 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:25:34.564 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:34.564 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:34.564 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:34.564 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:34.564 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:34.564 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:34.564 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:34.564 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:34.564 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:34.564 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:34.564 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:34.564 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.564 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:34.823 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.823 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:34.823 "name": "raid_bdev1", 00:25:34.823 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:34.823 "strip_size_kb": 0, 
00:25:34.823 "state": "online", 00:25:34.823 "raid_level": "raid1", 00:25:34.823 "superblock": true, 00:25:34.823 "num_base_bdevs": 2, 00:25:34.823 "num_base_bdevs_discovered": 1, 00:25:34.823 "num_base_bdevs_operational": 1, 00:25:34.823 "base_bdevs_list": [ 00:25:34.823 { 00:25:34.823 "name": null, 00:25:34.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:34.823 "is_configured": false, 00:25:34.823 "data_offset": 0, 00:25:34.823 "data_size": 7936 00:25:34.823 }, 00:25:34.823 { 00:25:34.823 "name": "BaseBdev2", 00:25:34.823 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:34.823 "is_configured": true, 00:25:34.823 "data_offset": 256, 00:25:34.823 "data_size": 7936 00:25:34.823 } 00:25:34.823 ] 00:25:34.823 }' 00:25:34.823 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:34.823 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:35.082 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:35.082 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:35.082 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:35.082 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:35.082 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:35.082 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:35.082 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.082 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.082 
09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:35.082 09:25:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.341 09:25:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:35.341 "name": "raid_bdev1", 00:25:35.341 "uuid": "c84b9a9f-223b-4926-a4a5-aaf9c149c18c", 00:25:35.341 "strip_size_kb": 0, 00:25:35.341 "state": "online", 00:25:35.341 "raid_level": "raid1", 00:25:35.341 "superblock": true, 00:25:35.341 "num_base_bdevs": 2, 00:25:35.341 "num_base_bdevs_discovered": 1, 00:25:35.341 "num_base_bdevs_operational": 1, 00:25:35.341 "base_bdevs_list": [ 00:25:35.341 { 00:25:35.341 "name": null, 00:25:35.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.341 "is_configured": false, 00:25:35.341 "data_offset": 0, 00:25:35.341 "data_size": 7936 00:25:35.341 }, 00:25:35.341 { 00:25:35.341 "name": "BaseBdev2", 00:25:35.341 "uuid": "4cacaa06-4f8b-5f05-8176-d02fee211e4d", 00:25:35.341 "is_configured": true, 00:25:35.341 "data_offset": 256, 00:25:35.341 "data_size": 7936 00:25:35.341 } 00:25:35.341 ] 00:25:35.341 }' 00:25:35.341 09:25:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:35.341 09:25:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:35.341 09:25:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:35.341 09:25:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:35.341 09:25:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89827 00:25:35.341 09:25:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89827 ']' 00:25:35.341 09:25:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89827 00:25:35.341 09:25:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:25:35.341 09:25:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:35.341 09:25:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89827 00:25:35.341 killing process with pid 89827 00:25:35.341 Received shutdown signal, test time was about 60.000000 seconds 00:25:35.341 00:25:35.341 Latency(us) 00:25:35.341 [2024-10-15T09:25:19.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.341 [2024-10-15T09:25:19.269Z] =================================================================================================================== 00:25:35.341 [2024-10-15T09:25:19.269Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:35.341 09:25:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:35.341 09:25:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:35.341 09:25:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89827' 00:25:35.341 09:25:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 89827 00:25:35.341 [2024-10-15 09:25:19.166362] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:35.341 09:25:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 89827 00:25:35.341 [2024-10-15 09:25:19.166556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:35.341 [2024-10-15 09:25:19.166625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:25:35.341 [2024-10-15 09:25:19.166644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:25:35.600 [2024-10-15 09:25:19.457958] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:36.975 09:25:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:25:36.975 00:25:36.975 real 0m18.869s 00:25:36.975 user 0m25.719s 00:25:36.975 sys 0m1.513s 00:25:36.975 09:25:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:36.975 09:25:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:36.975 ************************************ 00:25:36.975 END TEST raid_rebuild_test_sb_md_interleaved 00:25:36.975 ************************************ 00:25:36.975 09:25:20 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:25:36.975 09:25:20 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:25:36.975 09:25:20 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89827 ']' 00:25:36.975 09:25:20 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89827 00:25:36.975 09:25:20 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:25:36.975 00:25:36.975 real 13m20.737s 00:25:36.975 user 18m40.915s 00:25:36.975 sys 1m53.868s 00:25:36.975 09:25:20 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:36.975 ************************************ 00:25:36.975 END TEST bdev_raid 00:25:36.975 ************************************ 00:25:36.975 09:25:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:36.975 09:25:20 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:25:36.975 09:25:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:36.975 09:25:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:36.975 09:25:20 -- common/autotest_common.sh@10 -- # set +x 00:25:36.975 
************************************ 00:25:36.975 START TEST spdkcli_raid 00:25:36.975 ************************************ 00:25:36.975 09:25:20 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:25:36.975 * Looking for test storage... 00:25:36.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:36.975 09:25:20 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:36.975 09:25:20 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:36.975 09:25:20 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:25:36.975 09:25:20 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:36.975 09:25:20 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:36.975 09:25:20 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:36.975 09:25:20 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:36.975 09:25:20 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:25:36.975 09:25:20 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:25:36.975 09:25:20 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:25:36.975 09:25:20 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:25:36.975 09:25:20 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:25:36.975 09:25:20 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:25:36.975 09:25:20 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:25:36.975 09:25:20 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:36.975 09:25:20 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:25:36.975 09:25:20 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:25:36.975 09:25:20 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:36.975 09:25:20 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:36.976 09:25:20 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:25:36.976 09:25:20 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:25:36.976 09:25:20 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:36.976 09:25:20 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:25:36.976 09:25:20 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:25:37.235 09:25:20 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:25:37.235 09:25:20 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:25:37.235 09:25:20 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:37.235 09:25:20 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:25:37.235 09:25:20 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:25:37.235 09:25:20 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:37.235 09:25:20 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:37.235 09:25:20 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:25:37.235 09:25:20 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:37.235 09:25:20 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:37.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.235 --rc genhtml_branch_coverage=1 00:25:37.235 --rc genhtml_function_coverage=1 00:25:37.235 --rc genhtml_legend=1 00:25:37.235 --rc geninfo_all_blocks=1 00:25:37.235 --rc geninfo_unexecuted_blocks=1 00:25:37.235 00:25:37.235 ' 00:25:37.235 09:25:20 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:37.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.235 --rc genhtml_branch_coverage=1 00:25:37.235 --rc genhtml_function_coverage=1 00:25:37.235 --rc genhtml_legend=1 00:25:37.235 --rc geninfo_all_blocks=1 00:25:37.235 --rc geninfo_unexecuted_blocks=1 00:25:37.235 00:25:37.235 ' 00:25:37.235 
09:25:20 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:37.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.235 --rc genhtml_branch_coverage=1 00:25:37.235 --rc genhtml_function_coverage=1 00:25:37.235 --rc genhtml_legend=1 00:25:37.235 --rc geninfo_all_blocks=1 00:25:37.235 --rc geninfo_unexecuted_blocks=1 00:25:37.235 00:25:37.235 ' 00:25:37.235 09:25:20 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:37.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.235 --rc genhtml_branch_coverage=1 00:25:37.235 --rc genhtml_function_coverage=1 00:25:37.235 --rc genhtml_legend=1 00:25:37.235 --rc geninfo_all_blocks=1 00:25:37.235 --rc geninfo_unexecuted_blocks=1 00:25:37.235 00:25:37.235 ' 00:25:37.235 09:25:20 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:37.235 09:25:20 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:37.235 09:25:20 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:37.235 09:25:20 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:25:37.235 09:25:20 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:25:37.235 09:25:20 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:25:37.235 09:25:20 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:25:37.235 09:25:20 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:25:37.235 09:25:20 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:25:37.235 09:25:20 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:25:37.235 09:25:20 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:25:37.235 09:25:20 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:25:37.235 09:25:20 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:25:37.235 09:25:20 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:25:37.235 09:25:20 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:25:37.235 09:25:20 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:25:37.235 09:25:20 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:25:37.235 09:25:20 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:25:37.235 09:25:20 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:25:37.235 09:25:20 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:25:37.235 09:25:20 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:25:37.235 09:25:20 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:25:37.235 09:25:20 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:25:37.235 09:25:20 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:25:37.235 09:25:20 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:25:37.235 09:25:20 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:25:37.235 09:25:20 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:37.235 09:25:20 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:37.235 09:25:20 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:37.235 09:25:20 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:37.235 09:25:20 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:37.235 09:25:20 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:25:37.235 09:25:20 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:25:37.235 09:25:20 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:37.235 09:25:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:37.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.235 09:25:20 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:25:37.235 09:25:20 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90513 00:25:37.235 09:25:20 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90513 00:25:37.235 09:25:20 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 90513 ']' 00:25:37.235 09:25:20 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:25:37.235 09:25:20 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.235 09:25:20 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:37.235 09:25:20 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.235 09:25:20 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:37.235 09:25:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:37.235 [2024-10-15 09:25:21.108476] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:25:37.235 [2024-10-15 09:25:21.108666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90513 ] 00:25:37.494 [2024-10-15 09:25:21.289124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:37.753 [2024-10-15 09:25:21.440841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.753 [2024-10-15 09:25:21.440841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.690 09:25:22 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:38.690 09:25:22 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:25:38.690 09:25:22 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:25:38.690 09:25:22 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:38.690 09:25:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:38.690 09:25:22 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:25:38.690 09:25:22 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:38.690 09:25:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:38.690 09:25:22 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:38.690 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:38.690 ' 00:25:40.601 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:25:40.601 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:25:40.601 09:25:24 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:25:40.602 09:25:24 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:40.602 09:25:24 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:25:40.602 09:25:24 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:25:40.602 09:25:24 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:40.602 09:25:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:40.602 09:25:24 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:25:40.602 ' 00:25:41.537 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:25:41.537 09:25:25 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:25:41.537 09:25:25 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:41.537 09:25:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:41.537 09:25:25 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:25:41.537 09:25:25 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:41.537 09:25:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:41.537 09:25:25 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:25:41.537 09:25:25 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:25:42.104 09:25:25 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:25:42.363 09:25:26 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:25:42.363 09:25:26 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:25:42.363 09:25:26 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:42.363 09:25:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:42.363 09:25:26 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:25:42.363 09:25:26 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:42.363 09:25:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:42.363 09:25:26 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:25:42.363 ' 00:25:43.298 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:25:43.298 09:25:27 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:25:43.298 09:25:27 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:43.298 09:25:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:43.298 09:25:27 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:25:43.298 09:25:27 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:43.298 09:25:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:43.298 09:25:27 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:25:43.298 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:25:43.298 ' 00:25:44.738 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:25:44.738 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:25:44.996 09:25:28 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:25:44.996 09:25:28 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:44.996 09:25:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:44.996 09:25:28 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90513 00:25:44.996 09:25:28 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 90513 ']' 00:25:44.996 09:25:28 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 90513 00:25:44.996 09:25:28 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:25:44.996 09:25:28 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:44.996 09:25:28 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90513 00:25:44.996 09:25:28 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:44.996 killing process with pid 90513 00:25:44.996 09:25:28 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:44.996 09:25:28 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90513' 00:25:44.996 09:25:28 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 90513 00:25:44.996 09:25:28 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 90513 00:25:47.592 09:25:31 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:25:47.592 09:25:31 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90513 ']' 00:25:47.592 09:25:31 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90513 00:25:47.592 09:25:31 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 90513 ']' 00:25:47.592 09:25:31 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 90513 00:25:47.592 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (90513) - No such process 00:25:47.592 Process with pid 90513 is not found 00:25:47.592 09:25:31 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 90513 is not found' 00:25:47.592 09:25:31 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:25:47.592 09:25:31 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:47.592 09:25:31 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:47.592 09:25:31 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:47.592 00:25:47.592 real 0m10.506s 00:25:47.592 user 0m21.681s 00:25:47.592 sys 
0m1.227s 00:25:47.592 09:25:31 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:47.592 09:25:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:47.592 ************************************ 00:25:47.592 END TEST spdkcli_raid 00:25:47.592 ************************************ 00:25:47.592 09:25:31 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:25:47.592 09:25:31 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:47.592 09:25:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:47.592 09:25:31 -- common/autotest_common.sh@10 -- # set +x 00:25:47.592 ************************************ 00:25:47.592 START TEST blockdev_raid5f 00:25:47.592 ************************************ 00:25:47.592 09:25:31 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:25:47.592 * Looking for test storage... 00:25:47.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:25:47.592 09:25:31 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:47.592 09:25:31 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:25:47.592 09:25:31 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:47.592 09:25:31 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.592 09:25:31 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:25:47.592 09:25:31 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.592 09:25:31 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:47.592 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.592 --rc genhtml_branch_coverage=1 00:25:47.592 --rc genhtml_function_coverage=1 00:25:47.592 --rc genhtml_legend=1 00:25:47.592 --rc geninfo_all_blocks=1 00:25:47.592 --rc geninfo_unexecuted_blocks=1 00:25:47.592 00:25:47.592 ' 00:25:47.592 09:25:31 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:47.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.593 --rc genhtml_branch_coverage=1 00:25:47.593 --rc genhtml_function_coverage=1 00:25:47.593 --rc genhtml_legend=1 00:25:47.593 --rc geninfo_all_blocks=1 00:25:47.593 --rc geninfo_unexecuted_blocks=1 00:25:47.593 00:25:47.593 ' 00:25:47.593 09:25:31 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:47.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.593 --rc genhtml_branch_coverage=1 00:25:47.593 --rc genhtml_function_coverage=1 00:25:47.593 --rc genhtml_legend=1 00:25:47.593 --rc geninfo_all_blocks=1 00:25:47.593 --rc geninfo_unexecuted_blocks=1 00:25:47.593 00:25:47.593 ' 00:25:47.593 09:25:31 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:47.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.593 --rc genhtml_branch_coverage=1 00:25:47.593 --rc genhtml_function_coverage=1 00:25:47.593 --rc genhtml_legend=1 00:25:47.593 --rc geninfo_all_blocks=1 00:25:47.593 --rc geninfo_unexecuted_blocks=1 00:25:47.593 00:25:47.593 ' 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90792 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
90792 00:25:47.593 09:25:31 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 90792 ']' 00:25:47.593 09:25:31 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:25:47.593 09:25:31 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.593 09:25:31 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:47.593 09:25:31 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.593 09:25:31 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:47.593 09:25:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:47.852 [2024-10-15 09:25:31.615560] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:25:47.852 [2024-10-15 09:25:31.616186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90792 ] 00:25:48.111 [2024-10-15 09:25:31.808049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.111 [2024-10-15 09:25:31.954229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.046 09:25:32 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:49.046 09:25:32 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:25:49.046 09:25:32 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:25:49.046 09:25:32 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:25:49.046 09:25:32 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:25:49.046 09:25:32 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.046 09:25:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:49.046 Malloc0 00:25:49.305 Malloc1 00:25:49.305 Malloc2 00:25:49.305 09:25:33 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.305 09:25:33 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:25:49.305 09:25:33 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.305 09:25:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:49.305 09:25:33 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.305 09:25:33 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:25:49.305 09:25:33 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:25:49.305 09:25:33 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.305 09:25:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:49.305 09:25:33 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.305 09:25:33 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:25:49.305 09:25:33 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.305 09:25:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:49.305 09:25:33 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.305 09:25:33 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:25:49.305 09:25:33 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.305 09:25:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:49.305 09:25:33 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.305 09:25:33 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:25:49.305 09:25:33 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:25:49.305 09:25:33 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:25:49.305 09:25:33 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.305 09:25:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:49.305 09:25:33 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.305 09:25:33 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:25:49.305 09:25:33 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:25:49.305 09:25:33 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "744f3aca-b2b8-44f7-b5b1-fe524c3f530c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "744f3aca-b2b8-44f7-b5b1-fe524c3f530c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "744f3aca-b2b8-44f7-b5b1-fe524c3f530c",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "835ed942-9fd3-41c7-8f38-c924a5c7e659",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"d8bb6090-6cad-4841-a1b5-f8839b699880",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "ebdb30f4-eaee-462d-b0d1-f7046af84275",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:25:49.305 09:25:33 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:25:49.305 09:25:33 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:25:49.305 09:25:33 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:25:49.305 09:25:33 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90792 00:25:49.305 09:25:33 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 90792 ']' 00:25:49.305 09:25:33 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 90792 00:25:49.305 09:25:33 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:25:49.564 09:25:33 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:49.564 09:25:33 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90792 00:25:49.564 killing process with pid 90792 00:25:49.564 09:25:33 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:49.564 09:25:33 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:49.564 09:25:33 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90792' 00:25:49.564 09:25:33 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 90792 00:25:49.564 09:25:33 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 90792 00:25:52.097 09:25:35 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:52.097 09:25:35 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:25:52.097 09:25:35 
blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:25:52.097 09:25:35 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:52.097 09:25:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:52.097 ************************************ 00:25:52.097 START TEST bdev_hello_world 00:25:52.097 ************************************ 00:25:52.097 09:25:35 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:25:52.356 [2024-10-15 09:25:36.052991] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:25:52.356 [2024-10-15 09:25:36.053389] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90858 ] 00:25:52.356 [2024-10-15 09:25:36.223723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.615 [2024-10-15 09:25:36.371333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.182 [2024-10-15 09:25:36.941078] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:25:53.182 [2024-10-15 09:25:36.941406] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:25:53.182 [2024-10-15 09:25:36.941443] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:25:53.182 [2024-10-15 09:25:36.942046] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:25:53.182 [2024-10-15 09:25:36.942233] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:25:53.182 [2024-10-15 09:25:36.942262] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:25:53.182 [2024-10-15 09:25:36.942333] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:25:53.182 00:25:53.182 [2024-10-15 09:25:36.942361] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:25:54.633 00:25:54.633 real 0m2.407s 00:25:54.633 user 0m1.938s 00:25:54.633 sys 0m0.342s 00:25:54.633 09:25:38 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:54.633 ************************************ 00:25:54.633 END TEST bdev_hello_world 00:25:54.633 ************************************ 00:25:54.633 09:25:38 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:25:54.633 09:25:38 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:25:54.633 09:25:38 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:54.633 09:25:38 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:54.633 09:25:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:54.633 ************************************ 00:25:54.633 START TEST bdev_bounds 00:25:54.633 ************************************ 00:25:54.633 09:25:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:25:54.633 Process bdevio pid: 90906 00:25:54.633 09:25:38 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90906 00:25:54.633 09:25:38 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:25:54.633 09:25:38 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:25:54.633 09:25:38 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90906' 00:25:54.633 09:25:38 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90906 00:25:54.633 09:25:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 90906 ']' 00:25:54.633 09:25:38 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.633 09:25:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:54.633 09:25:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.633 09:25:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:54.633 09:25:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:25:54.633 [2024-10-15 09:25:38.501431] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:25:54.633 [2024-10-15 09:25:38.501814] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90906 ] 00:25:54.891 [2024-10-15 09:25:38.688303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:55.149 [2024-10-15 09:25:38.865061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.149 [2024-10-15 09:25:38.865198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:55.149 [2024-10-15 09:25:38.865204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.716 09:25:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:55.716 09:25:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:25:55.716 09:25:39 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:25:55.974 I/O targets: 00:25:55.974 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:25:55.974 00:25:55.974 
00:25:55.974 CUnit - A unit testing framework for C - Version 2.1-3 00:25:55.974 http://cunit.sourceforge.net/ 00:25:55.974 00:25:55.974 00:25:55.974 Suite: bdevio tests on: raid5f 00:25:55.974 Test: blockdev write read block ...passed 00:25:55.974 Test: blockdev write zeroes read block ...passed 00:25:55.974 Test: blockdev write zeroes read no split ...passed 00:25:55.974 Test: blockdev write zeroes read split ...passed 00:25:56.233 Test: blockdev write zeroes read split partial ...passed 00:25:56.233 Test: blockdev reset ...passed 00:25:56.233 Test: blockdev write read 8 blocks ...passed 00:25:56.233 Test: blockdev write read size > 128k ...passed 00:25:56.233 Test: blockdev write read invalid size ...passed 00:25:56.233 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:56.233 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:56.233 Test: blockdev write read max offset ...passed 00:25:56.233 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:56.233 Test: blockdev writev readv 8 blocks ...passed 00:25:56.233 Test: blockdev writev readv 30 x 1block ...passed 00:25:56.233 Test: blockdev writev readv block ...passed 00:25:56.233 Test: blockdev writev readv size > 128k ...passed 00:25:56.233 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:56.233 Test: blockdev comparev and writev ...passed 00:25:56.233 Test: blockdev nvme passthru rw ...passed 00:25:56.233 Test: blockdev nvme passthru vendor specific ...passed 00:25:56.233 Test: blockdev nvme admin passthru ...passed 00:25:56.233 Test: blockdev copy ...passed 00:25:56.233 00:25:56.233 Run Summary: Type Total Ran Passed Failed Inactive 00:25:56.233 suites 1 1 n/a 0 0 00:25:56.233 tests 23 23 23 0 0 00:25:56.233 asserts 130 130 130 0 n/a 00:25:56.233 00:25:56.233 Elapsed time = 0.600 seconds 00:25:56.233 0 00:25:56.233 09:25:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90906 00:25:56.233 
09:25:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 90906 ']' 00:25:56.233 09:25:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 90906 00:25:56.233 09:25:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:25:56.233 09:25:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:56.233 09:25:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90906 00:25:56.233 09:25:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:56.233 09:25:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:56.233 killing process with pid 90906 00:25:56.233 09:25:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90906' 00:25:56.233 09:25:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 90906 00:25:56.233 09:25:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 90906 00:25:57.660 ************************************ 00:25:57.660 END TEST bdev_bounds 00:25:57.660 ************************************ 00:25:57.660 09:25:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:25:57.660 00:25:57.660 real 0m3.061s 00:25:57.660 user 0m7.618s 00:25:57.660 sys 0m0.506s 00:25:57.660 09:25:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:57.660 09:25:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:25:57.660 09:25:41 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:25:57.660 09:25:41 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:57.660 09:25:41 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:57.660 
09:25:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:57.660 ************************************ 00:25:57.660 START TEST bdev_nbd 00:25:57.660 ************************************ 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90967 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90967 /var/tmp/spdk-nbd.sock 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 90967 ']' 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:57.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:57.660 09:25:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:25:57.919 [2024-10-15 09:25:41.608797] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
00:25:57.919 [2024-10-15 09:25:41.608972] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.919 [2024-10-15 09:25:41.778288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.178 [2024-10-15 09:25:41.980772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.746 09:25:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:58.746 09:25:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:25:58.746 09:25:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:25:58.746 09:25:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:58.746 09:25:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:25:58.746 09:25:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:25:58.746 09:25:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:25:58.746 09:25:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:58.746 09:25:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:25:58.746 09:25:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:25:58.746 09:25:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:25:58.746 09:25:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:25:58.746 09:25:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:25:58.746 09:25:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:25:58.746 09:25:42 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:25:59.005 09:25:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:25:59.005 09:25:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:25:59.005 09:25:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:25:59.005 09:25:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:25:59.005 09:25:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:25:59.005 09:25:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:59.005 09:25:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:59.005 09:25:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:25:59.005 09:25:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:25:59.005 09:25:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:59.005 09:25:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:59.005 09:25:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:59.005 1+0 records in 00:25:59.005 1+0 records out 00:25:59.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435986 s, 9.4 MB/s 00:25:59.005 09:25:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:59.005 09:25:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:25:59.005 09:25:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:59.005 09:25:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:25:59.005 09:25:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:25:59.005 09:25:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:59.005 09:25:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:25:59.005 09:25:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:59.573 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:25:59.573 { 00:25:59.573 "nbd_device": "/dev/nbd0", 00:25:59.573 "bdev_name": "raid5f" 00:25:59.573 } 00:25:59.573 ]' 00:25:59.573 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:25:59.573 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:25:59.573 { 00:25:59.573 "nbd_device": "/dev/nbd0", 00:25:59.573 "bdev_name": "raid5f" 00:25:59.573 } 00:25:59.573 ]' 00:25:59.573 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:25:59.573 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:59.573 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:59.573 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:59.573 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:59.573 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:59.573 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:59.573 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:59.832 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:25:59.832 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:59.832 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:59.832 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:59.832 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:59.832 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:59.832 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:59.832 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:59.832 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:59.832 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:59.832 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:00.091 09:25:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:26:00.659 /dev/nbd0 00:26:00.659 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:00.659 09:25:44 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:00.659 09:25:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:00.659 09:25:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:00.659 09:25:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:00.659 09:25:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:00.659 09:25:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:00.659 09:25:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:00.659 09:25:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:00.659 09:25:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:00.659 09:25:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:00.659 1+0 records in 00:26:00.659 1+0 records out 00:26:00.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513915 s, 8.0 MB/s 00:26:00.659 09:25:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:00.659 09:25:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:00.659 09:25:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:00.659 09:25:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:00.659 09:25:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:00.659 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:00.659 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:00.659 09:25:44 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:00.659 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:00.659 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:00.918 { 00:26:00.918 "nbd_device": "/dev/nbd0", 00:26:00.918 "bdev_name": "raid5f" 00:26:00.918 } 00:26:00.918 ]' 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:00.918 { 00:26:00.918 "nbd_device": "/dev/nbd0", 00:26:00.918 "bdev_name": "raid5f" 00:26:00.918 } 00:26:00.918 ]' 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:26:00.918 256+0 records in 00:26:00.918 256+0 records out 00:26:00.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107509 s, 97.5 MB/s 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:00.918 256+0 records in 00:26:00.918 256+0 records out 00:26:00.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0408668 s, 25.7 MB/s 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:00.918 09:25:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:01.177 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:01.177 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:01.177 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:01.177 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:01.177 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:01.177 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:01.177 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:01.177 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:01.177 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:01.177 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:01.177 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:26:01.745 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:01.745 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:01.745 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:01.745 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:01.745 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:01.745 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:01.745 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:01.745 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:01.745 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:01.745 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:26:01.745 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:01.745 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:26:01.745 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:01.745 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:01.745 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:26:01.745 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:26:02.004 malloc_lvol_verify 00:26:02.004 09:25:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:26:02.262 31f35c79-4b71-49d9-9e59-9b56287c1396 00:26:02.262 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:26:02.521 15dc8976-a697-45af-b9f0-fe4da1df304b 00:26:02.521 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:26:02.780 /dev/nbd0 00:26:02.780 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:26:02.780 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:26:02.780 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:26:02.780 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:26:02.780 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:26:02.780 mke2fs 1.47.0 (5-Feb-2023) 00:26:02.780 Discarding device blocks: 0/4096 done 00:26:02.780 Creating filesystem with 4096 1k blocks and 1024 inodes 00:26:02.780 00:26:02.780 Allocating group tables: 0/1 done 00:26:02.780 Writing inode tables: 0/1 done 00:26:02.780 Creating journal (1024 blocks): done 00:26:02.780 Writing superblocks and filesystem accounting information: 0/1 done 00:26:02.780 00:26:02.780 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:02.780 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:02.780 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:02.780 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:02.780 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:02.780 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:02.780 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:03.039 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:03.039 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:03.039 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:03.039 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:03.039 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:03.039 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:03.039 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:03.039 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:03.039 09:25:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90967 00:26:03.039 09:25:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 90967 ']' 00:26:03.039 09:25:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 90967 00:26:03.039 09:25:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:26:03.039 09:25:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:03.039 09:25:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90967 00:26:03.039 09:25:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:03.039 09:25:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:03.039 killing process with pid 90967 00:26:03.039 09:25:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90967' 00:26:03.039 09:25:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 90967 00:26:03.039 09:25:46 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 90967 00:26:04.945 ************************************ 00:26:04.945 END TEST bdev_nbd 00:26:04.945 ************************************ 00:26:04.945 09:25:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:26:04.945 00:26:04.945 real 0m6.948s 00:26:04.945 user 0m9.964s 00:26:04.945 sys 0m1.478s 00:26:04.945 09:25:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:04.945 09:25:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:04.945 09:25:48 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:26:04.945 09:25:48 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:26:04.945 09:25:48 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:26:04.945 09:25:48 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:26:04.945 09:25:48 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:04.945 09:25:48 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:04.945 09:25:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:04.945 ************************************ 00:26:04.945 START TEST bdev_fio 00:26:04.945 ************************************ 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:26:04.945 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
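The fio_config_gen check at the end of the trace above gates a job-file option on the fio version string via a bash glob match. A minimal sketch of that gate, with the version string hard-coded here instead of running `/usr/src/fio/fio --version`:

```shell
#!/usr/bin/env bash
# Sketch of the version gate in fio_config_gen: only an fio 3.x binary
# gets serialize_overlap=1 appended to the generated job file.
fio_version="fio-3.35"        # stands in for "$(/usr/src/fio/fio --version)"
if [[ $fio_version == *fio-3* ]]; then
  echo serialize_overlap=1    # appended to bdev.fio in the real script
fi
```

The trace's `[[ fio-3.35 == *\f\i\o\-\3* ]]` is the same glob with each literal character backslash-escaped by xtrace.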
00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:04.945 ************************************ 00:26:04.945 START TEST bdev_fio_rw_verify 00:26:04.945 ************************************ 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:04.945 09:25:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:04.946 09:25:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:04.946 09:25:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:26:04.946 09:25:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:04.946 09:25:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:05.204 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:05.204 fio-3.35 00:26:05.204 Starting 1 thread 00:26:17.409 00:26:17.409 job_raid5f: (groupid=0, jobs=1): err= 0: pid=91182: Tue Oct 15 09:26:00 2024 00:26:17.409 read: IOPS=8089, BW=31.6MiB/s (33.1MB/s)(316MiB/10001msec) 00:26:17.409 slat (usec): min=23, max=158, avg=30.71, stdev= 5.87 00:26:17.409 clat (usec): min=14, max=726, avg=195.59, stdev=76.26 00:26:17.409 lat (usec): min=43, max=770, avg=226.31, stdev=77.61 00:26:17.409 clat percentiles (usec): 00:26:17.409 | 50.000th=[ 198], 99.000th=[ 379], 99.900th=[ 486], 99.990th=[ 660], 00:26:17.409 | 99.999th=[ 725] 00:26:17.409 write: IOPS=8482, BW=33.1MiB/s (34.7MB/s)(328MiB/9893msec); 0 zone resets 00:26:17.409 slat (usec): min=12, max=173, avg=24.62, stdev= 6.85 00:26:17.409 clat (usec): min=84, max=1104, avg=451.45, stdev=69.16 00:26:17.409 lat (usec): min=106, max=1182, avg=476.07, stdev=72.08 00:26:17.409 clat percentiles (usec): 00:26:17.409 | 50.000th=[ 453], 99.000th=[ 685], 99.900th=[ 791], 99.990th=[ 947], 00:26:17.409 | 99.999th=[ 1106] 00:26:17.409 bw ( KiB/s): min=29160, max=36344, per=99.10%, avg=33624.00, stdev=2052.09, samples=19 00:26:17.409 iops : min= 7290, max= 9086, avg=8406.00, stdev=513.02, samples=19 00:26:17.409 lat (usec) : 20=0.01%, 100=5.55%, 250=29.87%, 
500=56.03%, 750=8.45% 00:26:17.409 lat (usec) : 1000=0.10% 00:26:17.409 lat (msec) : 2=0.01% 00:26:17.409 cpu : usr=98.60%, sys=0.54%, ctx=15, majf=0, minf=7099 00:26:17.409 IO depths : 1=7.8%, 2=19.9%, 4=55.1%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.409 complete : 0=0.0%, 4=90.1%, 8=9.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.409 issued rwts: total=80904,83913,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.409 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:17.409 00:26:17.409 Run status group 0 (all jobs): 00:26:17.410 READ: bw=31.6MiB/s (33.1MB/s), 31.6MiB/s-31.6MiB/s (33.1MB/s-33.1MB/s), io=316MiB (331MB), run=10001-10001msec 00:26:17.410 WRITE: bw=33.1MiB/s (34.7MB/s), 33.1MiB/s-33.1MiB/s (34.7MB/s-34.7MB/s), io=328MiB (344MB), run=9893-9893msec 00:26:17.977 ----------------------------------------------------- 00:26:17.977 Suppressions used: 00:26:17.977 count bytes template 00:26:17.977 1 7 /usr/src/fio/parse.c 00:26:17.977 488 46848 /usr/src/fio/iolog.c 00:26:17.977 1 8 libtcmalloc_minimal.so 00:26:17.977 1 904 libcrypto.so 00:26:17.977 ----------------------------------------------------- 00:26:17.977 00:26:17.977 00:26:17.977 real 0m13.090s 00:26:17.977 user 0m13.345s 00:26:17.977 sys 0m0.978s 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:17.977 ************************************ 00:26:17.977 END TEST bdev_fio_rw_verify 00:26:17.977 ************************************ 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # 
fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "744f3aca-b2b8-44f7-b5b1-fe524c3f530c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "744f3aca-b2b8-44f7-b5b1-fe524c3f530c",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "744f3aca-b2b8-44f7-b5b1-fe524c3f530c",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "835ed942-9fd3-41c7-8f38-c924a5c7e659",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "d8bb6090-6cad-4841-a1b5-f8839b699880",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "ebdb30f4-eaee-462d-b0d1-f7046af84275",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:17.977 /home/vagrant/spdk_repo/spdk 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:26:17.977 00:26:17.977 real 
0m13.326s 00:26:17.977 user 0m13.442s 00:26:17.977 sys 0m1.079s 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:17.977 09:26:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:17.977 ************************************ 00:26:17.977 END TEST bdev_fio 00:26:17.977 ************************************ 00:26:17.977 09:26:01 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:17.977 09:26:01 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:17.977 09:26:01 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:26:17.977 09:26:01 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:17.977 09:26:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:17.977 ************************************ 00:26:17.977 START TEST bdev_verify 00:26:17.977 ************************************ 00:26:17.977 09:26:01 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:18.235 [2024-10-15 09:26:02.006555] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 
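Before launching fio, the fio_plugin wrapper traced in the bdev_fio test above detects which sanitizer runtime the SPDK fio engine links against and preloads it alongside the plugin. A sketch of that detection, run against /bin/ls here since the real build/fio/spdk_bdev binary is not available in this sketch:

```shell
# Sketch of the sanitizer detection from fio_plugin (autotest_common.sh@1345):
# ldd the plugin, pick the resolved libasan path, and preload it together
# with the plugin so fio loads both. /bin/ls stands in for build/fio/spdk_bdev.
plugin=/bin/ls
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}') || true
if [ -n "$asan_lib" ]; then
  # matches the LD_PRELOAD='/usr/lib64/libasan.so.8 .../spdk_bdev' in the trace
  export LD_PRELOAD="$asan_lib $plugin"
fi
echo "asan_lib='${asan_lib}'"   # empty here: ls is not ASAN-instrumented
```

In the trace the ASAN-built plugin resolves to /usr/lib64/libasan.so.8, so the loop breaks on the first sanitizer found and fio is started under that LD_PRELOAD.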
00:26:18.235 [2024-10-15 09:26:02.006800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91343 ] 00:26:18.493 [2024-10-15 09:26:02.190968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:18.493 [2024-10-15 09:26:02.369203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.494 [2024-10-15 09:26:02.369213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.060 Running I/O for 5 seconds... 00:26:21.394 11974.00 IOPS, 46.77 MiB/s [2024-10-15T09:26:06.257Z] 12537.00 IOPS, 48.97 MiB/s [2024-10-15T09:26:07.192Z] 12313.00 IOPS, 48.10 MiB/s [2024-10-15T09:26:08.152Z] 12427.00 IOPS, 48.54 MiB/s [2024-10-15T09:26:08.152Z] 12497.00 IOPS, 48.82 MiB/s 00:26:24.224 Latency(us) 00:26:24.224 [2024-10-15T09:26:08.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.224 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:24.224 Verification LBA range: start 0x0 length 0x2000 00:26:24.224 raid5f : 5.02 6262.70 24.46 0.00 0.00 30783.48 296.03 23831.27 00:26:24.224 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:24.224 Verification LBA range: start 0x2000 length 0x2000 00:26:24.224 raid5f : 5.03 6228.28 24.33 0.00 0.00 31028.05 144.29 23712.12 00:26:24.224 [2024-10-15T09:26:08.152Z] =================================================================================================================== 00:26:24.224 [2024-10-15T09:26:08.152Z] Total : 12490.98 48.79 0.00 0.00 30905.53 144.29 23831.27 00:26:25.598 00:26:25.598 real 0m7.573s 00:26:25.598 user 0m13.726s 00:26:25.598 sys 0m0.424s 00:26:25.598 09:26:09 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:25.598 
************************************ 00:26:25.598 END TEST bdev_verify 00:26:25.598 09:26:09 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:26:25.598 ************************************ 00:26:25.598 09:26:09 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:25.598 09:26:09 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:26:25.598 09:26:09 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:25.598 09:26:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:25.598 ************************************ 00:26:25.598 START TEST bdev_verify_big_io 00:26:25.598 ************************************ 00:26:25.598 09:26:09 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:25.857 [2024-10-15 09:26:09.638037] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:26:25.857 [2024-10-15 09:26:09.638264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91436 ] 00:26:26.116 [2024-10-15 09:26:09.817884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:26.116 [2024-10-15 09:26:09.975162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.116 [2024-10-15 09:26:09.975173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.682 Running I/O for 5 seconds... 
00:26:29.023 506.00 IOPS, 31.62 MiB/s [2024-10-15T09:26:13.885Z] 633.00 IOPS, 39.56 MiB/s [2024-10-15T09:26:14.820Z] 654.33 IOPS, 40.90 MiB/s [2024-10-15T09:26:15.755Z] 650.00 IOPS, 40.62 MiB/s [2024-10-15T09:26:16.014Z] 660.00 IOPS, 41.25 MiB/s 00:26:32.086 Latency(us) 00:26:32.086 [2024-10-15T09:26:16.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.086 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:32.086 Verification LBA range: start 0x0 length 0x200 00:26:32.086 raid5f : 5.35 332.22 20.76 0.00 0.00 9525846.86 199.21 423243.40 00:26:32.086 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:32.086 Verification LBA range: start 0x200 length 0x200 00:26:32.086 raid5f : 5.34 332.91 20.81 0.00 0.00 9479972.58 294.17 421336.90 00:26:32.086 [2024-10-15T09:26:16.014Z] =================================================================================================================== 00:26:32.086 [2024-10-15T09:26:16.014Z] Total : 665.13 41.57 0.00 0.00 9502909.72 199.21 423243.40 00:26:33.464 ************************************ 00:26:33.464 END TEST bdev_verify_big_io 00:26:33.464 ************************************ 00:26:33.464 00:26:33.464 real 0m7.843s 00:26:33.464 user 0m14.351s 00:26:33.464 sys 0m0.376s 00:26:33.464 09:26:17 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:33.464 09:26:17 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:26:33.723 09:26:17 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:33.723 09:26:17 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:26:33.723 09:26:17 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:33.723 09:26:17 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:33.723 ************************************ 00:26:33.723 START TEST bdev_write_zeroes 00:26:33.723 ************************************ 00:26:33.723 09:26:17 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:33.723 [2024-10-15 09:26:17.530520] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:26:33.723 [2024-10-15 09:26:17.530717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91541 ] 00:26:33.982 [2024-10-15 09:26:17.706607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.982 [2024-10-15 09:26:17.858667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.548 Running I/O for 1 seconds... 
00:26:35.925 19239.00 IOPS, 75.15 MiB/s 00:26:35.925 Latency(us) 00:26:35.925 [2024-10-15T09:26:19.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.925 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:35.925 raid5f : 1.01 19220.59 75.08 0.00 0.00 6632.05 2085.24 8996.31 00:26:35.925 [2024-10-15T09:26:19.853Z] =================================================================================================================== 00:26:35.925 [2024-10-15T09:26:19.853Z] Total : 19220.59 75.08 0.00 0.00 6632.05 2085.24 8996.31 00:26:37.301 00:26:37.301 real 0m3.437s 00:26:37.301 user 0m2.936s 00:26:37.301 sys 0m0.368s 00:26:37.301 09:26:20 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:37.301 09:26:20 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:26:37.301 ************************************ 00:26:37.301 END TEST bdev_write_zeroes 00:26:37.301 ************************************ 00:26:37.301 09:26:20 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:37.301 09:26:20 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:26:37.301 09:26:20 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:37.301 09:26:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:37.301 ************************************ 00:26:37.301 START TEST bdev_json_nonenclosed 00:26:37.301 ************************************ 00:26:37.301 09:26:20 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:37.301 [2024-10-15 
09:26:21.009659] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:26:37.301 [2024-10-15 09:26:21.009822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91593 ] 00:26:37.301 [2024-10-15 09:26:21.181459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.560 [2024-10-15 09:26:21.351144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.560 [2024-10-15 09:26:21.351298] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:26:37.560 [2024-10-15 09:26:21.351350] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:26:37.560 [2024-10-15 09:26:21.351368] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:37.818 00:26:37.818 real 0m0.726s 00:26:37.818 user 0m0.483s 00:26:37.818 sys 0m0.139s 00:26:37.818 09:26:21 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:37.818 09:26:21 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:26:37.818 ************************************ 00:26:37.818 END TEST bdev_json_nonenclosed 00:26:37.818 ************************************ 00:26:37.818 09:26:21 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:37.818 09:26:21 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:26:37.818 09:26:21 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:37.819 09:26:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:37.819 
************************************ 00:26:37.819 START TEST bdev_json_nonarray 00:26:37.819 ************************************ 00:26:37.819 09:26:21 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:38.077 [2024-10-15 09:26:21.785765] Starting SPDK v25.01-pre git sha1 aa3f30c36 / DPDK 24.03.0 initialization... 00:26:38.077 [2024-10-15 09:26:21.785988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91620 ] 00:26:38.077 [2024-10-15 09:26:21.955179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.335 [2024-10-15 09:26:22.100114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.335 [2024-10-15 09:26:22.100290] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:26:38.335 [2024-10-15 09:26:22.100325] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:26:38.335 [2024-10-15 09:26:22.100352] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:38.594 00:26:38.594 real 0m0.702s 00:26:38.594 user 0m0.442s 00:26:38.594 sys 0m0.155s 00:26:38.594 09:26:22 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:38.594 ************************************ 00:26:38.594 END TEST bdev_json_nonarray 00:26:38.594 ************************************ 00:26:38.594 09:26:22 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:26:38.594 09:26:22 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:26:38.594 09:26:22 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:26:38.594 09:26:22 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:26:38.594 09:26:22 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:26:38.594 09:26:22 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:26:38.594 09:26:22 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:26:38.594 09:26:22 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:38.594 09:26:22 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:26:38.594 09:26:22 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:26:38.594 09:26:22 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:26:38.594 09:26:22 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:26:38.594 00:26:38.594 real 0m51.175s 00:26:38.594 user 1m9.593s 00:26:38.594 sys 0m5.943s 00:26:38.594 09:26:22 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:38.594 ************************************ 00:26:38.594 END TEST blockdev_raid5f 00:26:38.594 
************************************ 00:26:38.594 09:26:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:38.594 09:26:22 -- spdk/autotest.sh@194 -- # uname -s 00:26:38.594 09:26:22 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:26:38.594 09:26:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:26:38.594 09:26:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:26:38.594 09:26:22 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:26:38.594 09:26:22 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:26:38.594 09:26:22 -- spdk/autotest.sh@256 -- # timing_exit lib 00:26:38.594 09:26:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:38.594 09:26:22 -- common/autotest_common.sh@10 -- # set +x 00:26:38.852 09:26:22 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:26:38.852 09:26:22 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:26:38.852 09:26:22 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:26:38.852 09:26:22 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:26:38.852 09:26:22 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:26:38.852 09:26:22 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:26:38.852 09:26:22 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:26:38.852 09:26:22 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:26:38.852 09:26:22 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:26:38.852 09:26:22 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:26:38.852 09:26:22 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:38.852 09:26:22 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:38.852 09:26:22 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:26:38.852 09:26:22 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:26:38.852 09:26:22 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:26:38.852 09:26:22 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:26:38.852 09:26:22 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:26:38.852 09:26:22 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:26:38.852 09:26:22 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 
00:26:38.852 09:26:22 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:26:38.852 09:26:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:38.852 09:26:22 -- common/autotest_common.sh@10 -- # set +x 00:26:38.852 09:26:22 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:26:38.852 09:26:22 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:26:38.852 09:26:22 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:26:38.852 09:26:22 -- common/autotest_common.sh@10 -- # set +x 00:26:40.225 INFO: APP EXITING 00:26:40.225 INFO: killing all VMs 00:26:40.225 INFO: killing vhost app 00:26:40.225 INFO: EXIT DONE 00:26:40.794 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:40.794 Waiting for block devices as requested 00:26:40.794 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:40.794 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:41.728 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:41.729 Cleaning 00:26:41.729 Removing: /var/run/dpdk/spdk0/config 00:26:41.729 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:41.729 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:41.729 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:41.729 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:41.729 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:41.729 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:41.729 Removing: /dev/shm/spdk_tgt_trace.pid56905 00:26:41.729 Removing: /var/run/dpdk/spdk0 00:26:41.729 Removing: /var/run/dpdk/spdk_pid56665 00:26:41.729 Removing: /var/run/dpdk/spdk_pid56905 00:26:41.729 Removing: /var/run/dpdk/spdk_pid57140 00:26:41.729 Removing: /var/run/dpdk/spdk_pid57249 00:26:41.729 Removing: /var/run/dpdk/spdk_pid57305 00:26:41.729 Removing: /var/run/dpdk/spdk_pid57439 00:26:41.729 Removing: /var/run/dpdk/spdk_pid57468 00:26:41.729 
Removing: /var/run/dpdk/spdk_pid57678 00:26:41.729 Removing: /var/run/dpdk/spdk_pid57795 00:26:41.729 Removing: /var/run/dpdk/spdk_pid57902 00:26:41.729 Removing: /var/run/dpdk/spdk_pid58026 00:26:41.729 Removing: /var/run/dpdk/spdk_pid58140 00:26:41.729 Removing: /var/run/dpdk/spdk_pid58179 00:26:41.729 Removing: /var/run/dpdk/spdk_pid58221 00:26:41.729 Removing: /var/run/dpdk/spdk_pid58297 00:26:41.729 Removing: /var/run/dpdk/spdk_pid58405 00:26:41.729 Removing: /var/run/dpdk/spdk_pid58896 00:26:41.729 Removing: /var/run/dpdk/spdk_pid58982 00:26:41.729 Removing: /var/run/dpdk/spdk_pid59056 00:26:41.729 Removing: /var/run/dpdk/spdk_pid59078 00:26:41.729 Removing: /var/run/dpdk/spdk_pid59242 00:26:41.729 Removing: /var/run/dpdk/spdk_pid59264 00:26:41.729 Removing: /var/run/dpdk/spdk_pid59423 00:26:41.729 Removing: /var/run/dpdk/spdk_pid59439 00:26:41.729 Removing: /var/run/dpdk/spdk_pid59514 00:26:41.729 Removing: /var/run/dpdk/spdk_pid59532 00:26:41.729 Removing: /var/run/dpdk/spdk_pid59605 00:26:41.729 Removing: /var/run/dpdk/spdk_pid59625 00:26:41.729 Removing: /var/run/dpdk/spdk_pid59826 00:26:41.729 Removing: /var/run/dpdk/spdk_pid59862 00:26:41.729 Removing: /var/run/dpdk/spdk_pid59951 00:26:41.729 Removing: /var/run/dpdk/spdk_pid61333 00:26:41.729 Removing: /var/run/dpdk/spdk_pid61550 00:26:41.729 Removing: /var/run/dpdk/spdk_pid61696 00:26:41.729 Removing: /var/run/dpdk/spdk_pid62356 00:26:41.729 Removing: /var/run/dpdk/spdk_pid62567 00:26:41.729 Removing: /var/run/dpdk/spdk_pid62713 00:26:41.729 Removing: /var/run/dpdk/spdk_pid63373 00:26:41.729 Removing: /var/run/dpdk/spdk_pid63708 00:26:41.729 Removing: /var/run/dpdk/spdk_pid63855 00:26:41.729 Removing: /var/run/dpdk/spdk_pid65273 00:26:41.729 Removing: /var/run/dpdk/spdk_pid65532 00:26:41.729 Removing: /var/run/dpdk/spdk_pid65683 00:26:41.729 Removing: /var/run/dpdk/spdk_pid67106 00:26:41.729 Removing: /var/run/dpdk/spdk_pid67366 00:26:41.729 Removing: /var/run/dpdk/spdk_pid67506 00:26:41.729 Removing: 
/var/run/dpdk/spdk_pid68931 00:26:41.729 Removing: /var/run/dpdk/spdk_pid69391 00:26:41.729 Removing: /var/run/dpdk/spdk_pid69532 00:26:41.729 Removing: /var/run/dpdk/spdk_pid71054 00:26:41.729 Removing: /var/run/dpdk/spdk_pid71324 00:26:41.729 Removing: /var/run/dpdk/spdk_pid71475 00:26:41.729 Removing: /var/run/dpdk/spdk_pid72996 00:26:41.729 Removing: /var/run/dpdk/spdk_pid73261 00:26:41.729 Removing: /var/run/dpdk/spdk_pid73412 00:26:41.729 Removing: /var/run/dpdk/spdk_pid74931 00:26:41.729 Removing: /var/run/dpdk/spdk_pid75429 00:26:41.729 Removing: /var/run/dpdk/spdk_pid75575 00:26:41.729 Removing: /var/run/dpdk/spdk_pid75719 00:26:41.729 Removing: /var/run/dpdk/spdk_pid76179 00:26:41.729 Removing: /var/run/dpdk/spdk_pid76945 00:26:41.729 Removing: /var/run/dpdk/spdk_pid77357 00:26:41.729 Removing: /var/run/dpdk/spdk_pid78076 00:26:41.729 Removing: /var/run/dpdk/spdk_pid78558 00:26:41.729 Removing: /var/run/dpdk/spdk_pid79355 00:26:41.729 Removing: /var/run/dpdk/spdk_pid79782 00:26:41.729 Removing: /var/run/dpdk/spdk_pid81796 00:26:41.729 Removing: /var/run/dpdk/spdk_pid82245 00:26:41.987 Removing: /var/run/dpdk/spdk_pid82698 00:26:41.987 Removing: /var/run/dpdk/spdk_pid84830 00:26:41.987 Removing: /var/run/dpdk/spdk_pid85321 00:26:41.987 Removing: /var/run/dpdk/spdk_pid85829 00:26:41.987 Removing: /var/run/dpdk/spdk_pid86907 00:26:41.987 Removing: /var/run/dpdk/spdk_pid87241 00:26:41.987 Removing: /var/run/dpdk/spdk_pid88198 00:26:41.987 Removing: /var/run/dpdk/spdk_pid88534 00:26:41.987 Removing: /var/run/dpdk/spdk_pid89493 00:26:41.987 Removing: /var/run/dpdk/spdk_pid89827 00:26:41.987 Removing: /var/run/dpdk/spdk_pid90513 00:26:41.987 Removing: /var/run/dpdk/spdk_pid90792 00:26:41.987 Removing: /var/run/dpdk/spdk_pid90858 00:26:41.987 Removing: /var/run/dpdk/spdk_pid90906 00:26:41.987 Removing: /var/run/dpdk/spdk_pid91167 00:26:41.987 Removing: /var/run/dpdk/spdk_pid91343 00:26:41.987 Removing: /var/run/dpdk/spdk_pid91436 00:26:41.987 Removing: 
/var/run/dpdk/spdk_pid91541 00:26:41.987 Removing: /var/run/dpdk/spdk_pid91593 00:26:41.987 Removing: /var/run/dpdk/spdk_pid91620 00:26:41.987 Clean 00:26:41.987 09:26:25 -- common/autotest_common.sh@1451 -- # return 0 00:26:41.988 09:26:25 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:26:41.988 09:26:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:41.988 09:26:25 -- common/autotest_common.sh@10 -- # set +x 00:26:41.988 09:26:25 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:26:41.988 09:26:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:41.988 09:26:25 -- common/autotest_common.sh@10 -- # set +x 00:26:41.988 09:26:25 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:41.988 09:26:25 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:41.988 09:26:25 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:41.988 09:26:25 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:26:41.988 09:26:25 -- spdk/autotest.sh@394 -- # hostname 00:26:41.988 09:26:25 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:26:42.247 geninfo: WARNING: invalid characters removed from testname! 
00:27:08.825 09:26:51 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:12.109 09:26:55 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:14.641 09:26:58 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:17.188 09:27:00 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:19.718 09:27:03 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:22.280 09:27:05 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:24.813 09:27:08 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:24.813 09:27:08 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:27:24.813 09:27:08 -- common/autotest_common.sh@1691 -- $ lcov --version 00:27:24.813 09:27:08 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:27:24.813 09:27:08 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:27:24.813 09:27:08 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:27:24.813 09:27:08 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:27:24.813 09:27:08 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:27:24.813 09:27:08 -- scripts/common.sh@336 -- $ IFS=.-: 00:27:24.813 09:27:08 -- scripts/common.sh@336 -- $ read -ra ver1 00:27:24.813 09:27:08 -- scripts/common.sh@337 -- $ IFS=.-: 00:27:24.813 09:27:08 -- scripts/common.sh@337 -- $ read -ra ver2 00:27:24.813 09:27:08 -- scripts/common.sh@338 -- $ local 'op=<' 00:27:24.813 09:27:08 -- scripts/common.sh@340 -- $ ver1_l=2 00:27:24.813 09:27:08 -- scripts/common.sh@341 -- $ ver2_l=1 00:27:24.813 09:27:08 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:27:24.813 09:27:08 -- scripts/common.sh@344 -- $ case "$op" in 00:27:24.813 09:27:08 -- scripts/common.sh@345 -- $ : 1 00:27:24.813 09:27:08 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:27:24.813 09:27:08 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:24.813 09:27:08 -- scripts/common.sh@365 -- $ decimal 1 00:27:24.813 09:27:08 -- scripts/common.sh@353 -- $ local d=1 00:27:24.813 09:27:08 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:27:24.813 09:27:08 -- scripts/common.sh@355 -- $ echo 1 00:27:24.813 09:27:08 -- scripts/common.sh@365 -- $ ver1[v]=1 00:27:24.813 09:27:08 -- scripts/common.sh@366 -- $ decimal 2 00:27:24.813 09:27:08 -- scripts/common.sh@353 -- $ local d=2 00:27:24.813 09:27:08 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:27:24.813 09:27:08 -- scripts/common.sh@355 -- $ echo 2 00:27:24.813 09:27:08 -- scripts/common.sh@366 -- $ ver2[v]=2 00:27:24.813 09:27:08 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:27:24.813 09:27:08 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:27:24.813 09:27:08 -- scripts/common.sh@368 -- $ return 0 00:27:24.813 09:27:08 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:24.813 09:27:08 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:27:24.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.813 --rc genhtml_branch_coverage=1 00:27:24.813 --rc genhtml_function_coverage=1 00:27:24.813 --rc genhtml_legend=1 00:27:24.813 --rc geninfo_all_blocks=1 00:27:24.813 --rc geninfo_unexecuted_blocks=1 00:27:24.813 00:27:24.813 ' 00:27:24.813 09:27:08 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:27:24.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.813 --rc genhtml_branch_coverage=1 00:27:24.813 --rc genhtml_function_coverage=1 00:27:24.813 --rc genhtml_legend=1 00:27:24.813 --rc geninfo_all_blocks=1 00:27:24.813 --rc geninfo_unexecuted_blocks=1 00:27:24.813 00:27:24.813 ' 00:27:24.813 09:27:08 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:27:24.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.813 --rc genhtml_branch_coverage=1 00:27:24.813 --rc 
genhtml_function_coverage=1 00:27:24.813 --rc genhtml_legend=1 00:27:24.813 --rc geninfo_all_blocks=1 00:27:24.813 --rc geninfo_unexecuted_blocks=1 00:27:24.813 00:27:24.813 ' 00:27:24.813 09:27:08 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:27:24.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.813 --rc genhtml_branch_coverage=1 00:27:24.813 --rc genhtml_function_coverage=1 00:27:24.813 --rc genhtml_legend=1 00:27:24.813 --rc geninfo_all_blocks=1 00:27:24.813 --rc geninfo_unexecuted_blocks=1 00:27:24.813 00:27:24.813 ' 00:27:24.813 09:27:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:24.814 09:27:08 -- scripts/common.sh@15 -- $ shopt -s extglob 00:27:24.814 09:27:08 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:27:24.814 09:27:08 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:24.814 09:27:08 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:24.814 09:27:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.814 09:27:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.814 09:27:08 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.814 09:27:08 -- paths/export.sh@5 -- $ export PATH 00:27:24.814 09:27:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.814 09:27:08 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:24.814 09:27:08 -- common/autobuild_common.sh@486 -- $ date +%s 00:27:24.814 09:27:08 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728984428.XXXXXX 00:27:24.814 09:27:08 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728984428.5hz3vy 00:27:24.814 09:27:08 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:27:24.814 09:27:08 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:27:24.814 09:27:08 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:27:24.814 09:27:08 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:24.814 09:27:08 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:24.814 09:27:08 -- common/autobuild_common.sh@502 -- $ 
get_config_params 00:27:24.814 09:27:08 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:27:24.814 09:27:08 -- common/autotest_common.sh@10 -- $ set +x 00:27:24.814 09:27:08 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:27:24.814 09:27:08 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:27:24.814 09:27:08 -- pm/common@17 -- $ local monitor 00:27:24.814 09:27:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:24.814 09:27:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:24.814 09:27:08 -- pm/common@25 -- $ sleep 1 00:27:24.814 09:27:08 -- pm/common@21 -- $ date +%s 00:27:24.814 09:27:08 -- pm/common@21 -- $ date +%s 00:27:24.814 09:27:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728984428 00:27:24.814 09:27:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728984428 00:27:24.814 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728984428_collect-vmstat.pm.log 00:27:24.814 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728984428_collect-cpu-load.pm.log 00:27:25.751 09:27:09 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:27:25.751 09:27:09 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:27:25.751 09:27:09 -- spdk/autopackage.sh@14 -- $ timing_finish 00:27:25.751 09:27:09 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:25.751 09:27:09 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:27:25.751 
09:27:09 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:25.751 09:27:09 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:27:25.751 09:27:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:27:25.751 09:27:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:27:25.751 09:27:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:25.751 09:27:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:27:25.751 09:27:09 -- pm/common@44 -- $ pid=93103 00:27:25.751 09:27:09 -- pm/common@50 -- $ kill -TERM 93103 00:27:25.751 09:27:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:25.751 09:27:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:27:25.751 09:27:09 -- pm/common@44 -- $ pid=93105 00:27:25.751 09:27:09 -- pm/common@50 -- $ kill -TERM 93105 00:27:25.751 + [[ -n 5203 ]] 00:27:25.751 + sudo kill 5203 00:27:26.018 [Pipeline] } 00:27:26.034 [Pipeline] // timeout 00:27:26.039 [Pipeline] } 00:27:26.053 [Pipeline] // stage 00:27:26.059 [Pipeline] } 00:27:26.075 [Pipeline] // catchError 00:27:26.097 [Pipeline] stage 00:27:26.101 [Pipeline] { (Stop VM) 00:27:26.134 [Pipeline] sh 00:27:26.415 + vagrant halt 00:27:30.660 ==> default: Halting domain... 00:27:35.937 [Pipeline] sh 00:27:36.218 + vagrant destroy -f 00:27:39.504 ==> default: Removing domain... 
00:27:39.774 [Pipeline] sh 00:27:40.081 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:27:40.090 [Pipeline] } 00:27:40.106 [Pipeline] // stage 00:27:40.111 [Pipeline] } 00:27:40.126 [Pipeline] // dir 00:27:40.130 [Pipeline] } 00:27:40.145 [Pipeline] // wrap 00:27:40.151 [Pipeline] } 00:27:40.165 [Pipeline] // catchError 00:27:40.175 [Pipeline] stage 00:27:40.177 [Pipeline] { (Epilogue) 00:27:40.190 [Pipeline] sh 00:27:40.472 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:47.048 [Pipeline] catchError 00:27:47.050 [Pipeline] { 00:27:47.062 [Pipeline] sh 00:27:47.344 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:47.344 Artifacts sizes are good 00:27:47.353 [Pipeline] } 00:27:47.368 [Pipeline] // catchError 00:27:47.379 [Pipeline] archiveArtifacts 00:27:47.386 Archiving artifacts 00:27:47.485 [Pipeline] cleanWs 00:27:47.497 [WS-CLEANUP] Deleting project workspace... 00:27:47.497 [WS-CLEANUP] Deferred wipeout is used... 00:27:47.504 [WS-CLEANUP] done 00:27:47.506 [Pipeline] } 00:27:47.523 [Pipeline] // stage 00:27:47.529 [Pipeline] } 00:27:47.544 [Pipeline] // node 00:27:47.550 [Pipeline] End of Pipeline 00:27:47.596 Finished: SUCCESS